Demonstrating traffic flow from an instance to the Internet

This section of the chapter is dedicated to a walkthrough that leverages the fundamental Neutron concepts discussed in the book so far. It demonstrates how to create standalone Neutron routers and connect them to both tenant and external provider networks in order to provide network connectivity to instances.

A VLAN provider network will be created and used as the external gateway network for a Neutron router, while a VLAN tenant network will be created for use by instances. The Neutron router will route traffic from instances in the tenant network to the Internet, and floating IPs will provide direct connectivity to those instances.

Setting the foundation

In this demonstration, a Cisco Adaptive Security Appliance (ASA) device serves as the physical network gateway device and is connected to the Internet. The inside interface of the Cisco ASA device has a configured IP address of 10.50.0.1/24 on VLAN 50 and will serve as the gateway for an external VLAN provider network created in the following section.

The following figure is the logical diagram of the network to be built as part of this demonstration:

Figure 7.5

In the preceding figure, a Cisco ASA device serves as the external network device in front of the OpenStack Cloud.

Creating an external provider network

In order to provide instances with external connectivity, a Neutron router must be connected to a provider network that is eligible for use as an external network.

Using the Neutron net-create command, create a provider network with the following attributes:

  • Name: GATEWAY_NET
  • Type: VLAN
  • Segmentation ID: 50
  • Physical network: physnet2
  • External: True
  • Shared: True

The following screenshot displays the resulting output of the net-create command:

Figure 7.6

Using the Neutron subnet-create command, create a subnet with the following attributes:

  • Name: GATEWAY_SUBNET
  • Network: 10.50.0.0
  • Subnet mask: 255.255.255.0
  • Gateway: 10.50.0.1
  • DHCP: Disabled
  • Allocation pool: 10.50.0.100 - 10.50.0.254
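
Although Neutron validates these values itself, the relationship between them can be sanity-checked with Python's ipaddress module. The following is an illustrative sketch using the addresses listed above, not part of the deployment steps:

```python
import ipaddress

# GATEWAY_SUBNET as defined above
network = ipaddress.ip_network("10.50.0.0/24")
gateway = ipaddress.ip_address("10.50.0.1")
pool_start = ipaddress.ip_address("10.50.0.100")
pool_end = ipaddress.ip_address("10.50.0.254")

# The gateway and both ends of the allocation pool lie inside the subnet
assert gateway in network
assert pool_start in network and pool_end in network

# The pool deliberately excludes 10.50.0.1-10.50.0.99, leaving those
# addresses free for the physical gateway and other statically
# addressed devices on VLAN 50
pool_size = int(pool_end) - int(pool_start) + 1
print(pool_size)  # 155 addresses available to Neutron ports
```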

The following screenshot displays the resulting output of the subnet-create command:

Figure 7.7

Creating a Neutron router

Create a router using the Neutron router-create command with the following attribute:

  • Name: MyRouter

The following screenshot displays the resulting output of the router-create command:

Figure 7.8

Attaching the router to the external network

When attaching a Neutron router to a provider network, the network must have its router:external attribute set to true to be eligible for use as an external network.

Using the following Neutron router-gateway-set command, attach the MyRouter router to the GATEWAY_NET network:

(neutron) router-gateway-set MyRouter GATEWAY_NET
Set gateway for router MyRouter

Using the Neutron router-port-list command, determine the external IP of the router, as follows:

Figure 7.9

Note

The IP address assigned to the router is procured from the allocation pool of the external network's subnet. As of the Kilo release of OpenStack, there is no way to specify the external address of the router.

Identifying the L3 agent and namespace

Once the gateway interface has been added, the router will be scheduled to an eligible L3 agent. Using the Neutron l3-agent-list-hosting-router command, you can determine which L3 agent the router was scheduled to, as follows:

Figure 7.10

The L3 agent is responsible for creating a network namespace that acts as the router. For easy identification, the name of the namespace incorporates the router's ID:

root@controller01:~# ip netns
qrouter-bc2a45f6-db3b-4ca4-a5ee-340af36a8293
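
The naming convention is simple enough to reproduce. The following sketch, using the router ID from the output above, shows how the agents derive namespace names:

```python
# Namespace names are a fixed prefix plus the resource's UUID:
#   L3 agent:   qrouter-<router UUID>
#   DHCP agent: qdhcp-<network UUID>
router_id = "bc2a45f6-db3b-4ca4-a5ee-340af36a8293"

def router_namespace(router_id: str) -> str:
    return "qrouter-" + router_id

print(router_namespace(router_id))
# qrouter-bc2a45f6-db3b-4ca4-a5ee-340af36a8293
```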

Inside the namespace, you will find an interface with the prefix qg. The qg interface is the gateway or external interface of the router. Neutron automatically provides the qg interface with an IP address from the allocation pool of the external network's subnet:

Figure 7.11

When using the Open vSwitch driver, the interface is connected directly to the integration bridge. When using the LinuxBridge driver, as in this example, the qg interface is one end of a veth pair whose other end is connected to a bridge on the host. Using ethtool, we can determine the peer index of the corresponding interface on the host:

Figure 7.12

Using ip link show, the corresponding interface (peer index 23) can be found by searching for the index on the controller:

Figure 7.13

When using the LinuxBridge driver, the veth interface is connected to a bridge that corresponds to the external network:

Figure 7.14

Tip

For easy identification, the bridge name includes the first 10 characters of the network ID. In addition, each end of the veth pair includes the first 10 characters of the port ID that is associated with the interface.

The namespace can communicate with other devices in the same subnet through the bridge. The other interface in the bridge, eth2.50, tags traffic as VLAN 50 as it exits the bridge out the physical interface eth2.

Observe the route table within the namespace. The default gateway address corresponds to the address defined in the subnet's gateway_ip attribute. In this case, it is 10.50.0.1:

Figure 7.15

Testing gateway connectivity

To test external connectivity from the Neutron router, ping the edge gateway device from within the router namespace:

Figure 7.16

Successful ping attempts from the router namespace to the physical gateway device demonstrate a proper external VLAN configuration on both hardware- and software-based networking components.

Creating an internal network

Within the admin tenant, create an internal network for instances. In this demonstration, a network will be created with the following attribute:

  • Name: TENANT_NET

The following screenshot demonstrates the resulting output of the net-create command:

Figure 7.17

Note how Neutron automatically determines the network type, physical network, and segmentation ID for the network. As the net-create command was executed without specific provider attributes, Neutron relied on the configuration found in the plugin configuration file to determine the type of network to create.

The following configuration options in the ML2 configuration file were used to determine the network type, physical network, and segmentation ID:

tenant_network_types = vlan,vxlan
network_vlan_ranges = physnet2:30:33

Because vlan is listed first in tenant_network_types, Neutron consumes the available VLAN segmentation IDs as tenant networks are created and only begins creating VXLAN networks once the VLAN ranges are exhausted.
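
This allocation order can be illustrated with a short sketch. The helper below is hypothetical and only models the behavior; the real logic lives in Neutron's ML2 type drivers:

```python
# Hypothetical model of tenant network type selection with
# tenant_network_types = vlan,vxlan and network_vlan_ranges = physnet2:30:33
available_vlans = list(range(30, 34))   # VLANs 30-33 on physnet2

def next_tenant_segment():
    # VLAN is tried first, as it is listed first in tenant_network_types
    if available_vlans:
        return ("vlan", available_vlans.pop(0))
    return ("vxlan", None)

# The first four tenant networks consume VLANs 30 through 33...
print([next_tenant_segment() for _ in range(4)])
# [('vlan', 30), ('vlan', 31), ('vlan', 32), ('vlan', 33)]

# ...and the fifth falls back to a VXLAN network
print(next_tenant_segment())  # ('vxlan', None)
```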

Using the Neutron subnet-create command, create a subnet with the following attributes:

  • Name: TENANT_SUBNET
  • Network: 10.30.0.0
  • Subnet mask: 255.255.255.0
  • Gateway: <auto>
  • DHCP range: <auto>
  • DNS nameserver: 8.8.8.8

The following screenshot demonstrates the resulting output of the subnet-create command:

Figure 7.18

Attaching the router to the internal network

Using the Neutron router-interface-add command, attach the TENANT_SUBNET subnet to MyRouter:

(neutron) router-interface-add MyRouter TENANT_SUBNET
Added interface 24b55e34-227f-4fe4-b341-35ff2f49a099 to router MyRouter.

Using the Neutron router-port-list command, determine the internal IP of the router:

Figure 7.19

When a particular port ID is not specified while using the router-interface-add command, the IP address assigned to the internal router interface defaults to the address set in the gateway_ip attribute of the subnet.
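By default, Neutron sets a subnet's gateway_ip to the first usable address in the CIDR, which is why a subnet created with Gateway: &lt;auto&gt; yields 10.30.0.1 here. This can be illustrated with the ipaddress module:

```python
import ipaddress

# TENANT_SUBNET was created with Gateway: <auto>; Neutron's default
# gateway_ip is the first host address in the CIDR
subnet = ipaddress.ip_network("10.30.0.0/24")
default_gateway = next(subnet.hosts())
print(default_gateway)  # 10.30.0.1
```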

Inside the router namespace, a new interface has been added with a prefix of qr. A qr interface is an internal interface of the router:

Figure 7.20

When using the Open vSwitch driver, the interface is connected directly to the integration bridge. When using the LinuxBridge driver, as in this example, each qr interface is one end of a veth pair whose other end is connected to a bridge on the host that corresponds to the tenant network:

Figure 7.21

Tip

For easy identification, the bridge name includes the first 10 characters of the network ID. In addition, each end of the veth pair includes the first 10 characters of the port ID associated with the interface.

The router namespace can communicate with other devices in the same subnet through the bridge. The eth2.30 interface in the bridge tags traffic as VLAN 30 as it exits the bridge out the eth2 parent interface.

Creating instances

Create two instances with the following characteristics:

  • Name: MyInstance-1, MyInstance-2
  • Network: TENANT_NET
  • Image: CirrOS
  • Flavor: m1.tiny

Use the glance image-list command to determine the ID of the CirrOS image downloaded in Chapter 2, Installing OpenStack:

Figure 7.22

Use the following nova boot command to boot two instances in the TENANT_NET network:

Figure 7.23

The nova list command can be used to return a list of instances and their IP addresses:

Figure 7.24

On one or both of the compute nodes, depending on where the instances were scheduled, a Linux bridge is created that corresponds to the TENANT_NET network. Connected to the bridge, we can find a VLAN interface and one or more tap interfaces that correspond to the instances:

Figure 7.25

Verifying instance connectivity

When a network and subnet are created with DHCP enabled, a network namespace is created by the neutron-dhcp-agent service, which serves as a DHCP server for the network. On the host running the neutron-dhcp-agent service, ip netns can be used to reveal the namespace:

Figure 7.26

For easy identification, the name of a DHCP namespace corresponds to the ID of the network it is serving. Inside the namespace, an interface with the prefix ns is created and assigned an address from the allocation pool of the subnet:

Figure 7.27

When using the Open vSwitch driver, the ns interface is connected directly to the integration bridge. When using the LinuxBridge driver, as in this example, the ns interface is one end of a veth pair whose other end is connected to the bridge on the host that corresponds to the network. Through this bridge, the namespace can communicate with other devices in the same subnet.

As the instances came online, they sent a DHCP request that was served by the dnsmasq process in the DHCP namespace. A populated ARP table within the namespace confirms that the instances are functioning in the VLAN on layer 2:

Figure 7.28

Note

A populated ARP table can only be used to verify connectivity when the l2population driver is not in use. The l2population driver prepopulates the ARP and forwarding tables of hosts to reduce overhead on overlay networks, so an ARP table may not provide an accurate picture of connectivity in those networks.

Before you can connect to the instances, security group rules must be updated to allow ICMP and SSH. Chapter 6, Managing Security Groups, focuses on the implementation and administration of security group rules in more detail. To test connectivity, add ICMP and SSH access to a security group applied to the instances. Use the following command to determine the security group for this particular instance:

# neutron port-list --device-id=<instance id> -c security_groups

The resulting output can be seen in the following screenshot:

Figure 7.29

Use the Neutron security-group-rule-create command to create rules within the respective security group, as follows:

Figure 7.30

Using an SSH client, connect to an instance from either the router or DHCP namespace. The CirrOS image has a built-in user named cirros with the password cubswin:):

Figure 7.31

Observe the routing table of the instance. The default gateway of the instance is the Neutron router created earlier in this chapter. Pinging an external resource from an instance should be successful provided external connectivity from the Neutron router exists:

Figure 7.32

Observing default NAT behavior

The default behavior of the Neutron router is to source NAT traffic from instances that lack floating IPs when traffic egresses the external or gateway interface of the router. From the eth2.30 interface of the controller node, we can observe the ICMP traffic from the instances—sourcing from the real address, 10.30.0.3—as it heads toward the router:

Figure 7.33

From the eth2.50 interface on the controller node, we can observe the ICMP traffic from the instances after it traverses the router sourcing as the router's address, 10.50.0.100:

Figure 7.34

A look at the iptables chains within the router namespace reveals the NAT rules responsible for this behavior:

Figure 7.35

In this configuration, instances can communicate with outside resources through the router as long as the instances initiate the connection. Outside resources cannot initiate connections directly to instances via their fixed IP address.
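
The behavior described above can be sketched as a simple connection-tracking model. This is purely illustrative, not the actual iptables implementation, and the addresses are those from this demonstration:

```python
# Illustrative model of the router's default SNAT: instance-initiated
# flows are translated to the router's external address and tracked,
# while unsolicited inbound flows find no translation and are dropped.
ROUTER_IP = "10.50.0.100"
conntrack = {}  # (router external IP, port) -> instance fixed IP

def outbound(fixed_ip, sport):
    # Traffic leaving the qg interface is rewritten to the router's address
    conntrack[(ROUTER_IP, sport)] = fixed_ip
    return (ROUTER_IP, sport)

def inbound(dst_ip, dport):
    # Replies to tracked flows are delivered; anything else returns None
    return conntrack.get((dst_ip, dport))

src = outbound("10.30.0.3", 54321)   # an instance initiates a connection
print(inbound(*src))                 # '10.30.0.3' - the reply is delivered
print(inbound(ROUTER_IP, 80))        # None - unsolicited traffic is dropped
```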

Assigning floating IPs

To initiate connections to instances behind Neutron routers from outside networks, you must configure a floating IP address and associate it with the instance. With Neutron, a floating IP is associated with the Neutron port that corresponds to the interface of the instance accepting connections.

Using the Neutron port-list command, determine the port ID of each instance that has been recently booted. The port-list command allows results to be filtered by device or instance ID, as shown in the following screenshot:

Figure 7.36

Using the Neutron floatingip-create command, create a single floating IP address and associate it with the port of the instance known as MyInstance-1:

Figure 7.37

From within the guest OS, verify that the instance can still communicate with outside resources:

Figure 7.38

From the eth2.50 interface on the controller node, we can observe the ICMP traffic from the instances after it traverses the router sourcing as the floating IP address, 10.50.0.101:

Figure 7.39

Within the router namespace, the floating IP is configured as a secondary address on the qg interface:

Figure 7.40

When the floating IP is configured as a secondary network address on the qg interface, the router can respond to ARP requests to the floating IP from the upstream gateway device and other Neutron routers or devices in the same external network.

A look at the iptables chains within the router namespace shows that the rules have been added to perform the 1:1 NAT translation from the floating IP to the fixed IP of MyInstance-1 and vice-versa:

Figure 7.41
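
The effect of these rules can be modelled as a bidirectional one-to-one mapping. The sketch below is illustrative only; the floating and fixed addresses are the ones used in this demonstration:

```python
# Illustrative 1:1 NAT model for a floating IP: the router rewrites the
# destination of inbound traffic and the source of outbound traffic.
floating_to_fixed = {"10.50.0.101": "10.30.0.3"}   # MyInstance-1
fixed_to_floating = {v: k for k, v in floating_to_fixed.items()}

def dnat_inbound(dst_ip):
    # Traffic to the floating IP is delivered to the instance's fixed IP
    return floating_to_fixed.get(dst_ip, dst_ip)

def snat_outbound(src_ip):
    # Traffic from the instance leaves sourced as its floating IP
    return fixed_to_floating.get(src_ip, src_ip)

print(dnat_inbound("10.50.0.101"))  # 10.30.0.3
print(snat_outbound("10.30.0.3"))   # 10.50.0.101
```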

Provided our client workstation can route to the external provider network, traffic can be initiated directly to the instance via the floating IP:

Figure 7.42

Reassigning floating IPs

Neutron provides the ability to quickly disassociate a floating IP from an instance or other network resource and associate it with another.

A listing of floating IPs shows the current association:

Figure 7.43

Using the Neutron floatingip-disassociate command, disassociate the floating IP from MyInstance-1. This disassociation can be seen in the following screenshot:

Figure 7.44

A floatingip-list shows that the floating IP is no longer associated with a port:

Figure 7.45

Note

The floating IP is still owned by the tenant who created it and cannot be assigned to another tenant without first being deleted.

Using the Neutron floatingip-associate command, associate the floating IP with the port of MyInstance-2, as shown in the following screenshot:

Figure 7.46

Observe the iptables rules within the router namespace. The NAT relationship is modified, and the traffic from MyInstance-2 will now appear as the floating IP:

Figure 7.47

As a result of the new association, attempting an SSH connection to the floating IP may result in the following message on the client machine:

Figure 7.48

The preceding message is a good indicator that traffic is being sent to a different host. Clearing the offending key and logging in to the instance reveals it to be MyInstance-2:

Figure 7.49

At this point, we have successfully deployed two instances behind a single virtual router and verified the connectivity to and from the instances using floating IPs. In the next section, we will explore how these same tasks can be accomplished within the Horizon dashboard.
