This section of the chapter walks through creating standalone Neutron routers and connecting them to both tenant and external provider networks to provide network connectivity to instances, leveraging the fundamental Neutron concepts discussed in the book so far.
A VLAN provider network will be created and used as an external gateway network for the Neutron router, while a VLAN tenant network will be created and used by instances. A Neutron router will be created and used to route traffic from the instances in the tenant network to the Internet, and floating IPs will be created and used to provide direct connectivity to instances.
In this demonstration, a Cisco Adaptive Security Appliance (ASA) device serves as the physical network gateway device and is connected to the Internet. The inside interface of the Cisco ASA device has a configured IP address of 10.50.0.1/24 on VLAN 50 and will serve as the gateway for an external VLAN provider network created in the following section.
The following figure is the logical diagram of the network to be built as part of this demonstration:
In the preceding figure, a Cisco ASA device serves as the external network device in front of the OpenStack Cloud.
In order to provide instances with external connectivity, a Neutron router must be connected to a provider network that is eligible for use as an external network.
Using the Neutron net-create command, create a provider network with the following attributes:

Name: GATEWAY_NET
Type: VLAN
Segmentation ID: 50
Physical network: physnet2
Shared: True
External: True

The following screenshot displays the resulting output of the net-create command:
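The exact command is not reproduced in the text; a net-create invocation along these lines would create a network with the attributes listed above (flag syntax follows the legacy neutron CLI and may vary by release):

```shell
# Create a shared, external VLAN provider network on physnet2
neutron net-create GATEWAY_NET \
  --provider:network_type=vlan \
  --provider:segmentation_id=50 \
  --provider:physical_network=physnet2 \
  --shared \
  --router:external=true
```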
Using the Neutron subnet-create command, create a subnet with the following attributes:

Name: GATEWAY_SUBNET
Network: 10.50.0.0
Subnet mask: 255.255.255.0
Gateway: 10.50.0.1
DHCP: Disabled
Allocation pool: 10.50.0.100 - 10.50.0.254

The following screenshot displays the resulting output of the subnet-create command:
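A subnet-create command along these lines would produce a subnet with those attributes; the flag syntax follows the legacy neutron CLI:

```shell
# Create the external subnet with DHCP disabled and a restricted allocation pool
neutron subnet-create GATEWAY_NET 10.50.0.0/24 \
  --name GATEWAY_SUBNET \
  --gateway 10.50.0.1 \
  --disable-dhcp \
  --allocation-pool start=10.50.0.100,end=10.50.0.254
```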
Create a router using the Neutron router-create command with the following attribute:

Name: MyRouter

The following screenshot displays the resulting output of the router-create command:
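The router requires no attributes beyond its name, so the command is a one-liner:

```shell
# Create a standalone Neutron router named MyRouter
neutron router-create MyRouter
```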
When attaching a Neutron router to a provider network, the network must have its router:external attribute set to true to be eligible for use as an external network.
Using the following Neutron router-gateway-set command, attach the MyRouter router to the GATEWAY_NET network:

(neutron) router-gateway-set MyRouter GATEWAY_NET
Set gateway for router MyRouter
Using the Neutron router-port-list command, determine the external IP of the router, as follows:
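The command takes the router's name or ID; the external IP is the port whose fixed IP falls within the GATEWAY_SUBNET allocation pool:

```shell
# List the ports attached to MyRouter, including their fixed IPs
neutron router-port-list MyRouter
```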
Once the gateway interface has been added, the router will be scheduled to an eligible L3 agent. Using the Neutron l3-agent-list-hosting-router command, you can determine which L3 agent the router was scheduled to, as follows:
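As with router-port-list, the command takes the router's name or ID:

```shell
# Show the L3 agent (and host) serving MyRouter
neutron l3-agent-list-hosting-router MyRouter
```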
The L3 agent is responsible for creating a network namespace that acts as the router. For easy identification, the name of the namespace incorporates the router's ID:
root@controller01:~# ip netns
qrouter-bc2a45f6-db3b-4ca4-a5ee-340af36a8293
Inside the namespace, you will find an interface with the prefix qg. The qg interface is the gateway, or external, interface of the router. Neutron automatically provides the qg interface with an IP address from the allocation pool of the external network's subnet:
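The interfaces within the namespace can be inspected by executing ip inside it, using the namespace name shown earlier:

```shell
# List the interfaces and addresses inside the router namespace
ip netns exec qrouter-bc2a45f6-db3b-4ca4-a5ee-340af36a8293 ip addr show
```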
When using the Open vSwitch driver, the interface is connected directly to the integration bridge. When using the LinuxBridge driver, as in this example, the qg interface is one end of a veth pair whose other end is connected to a bridge on the host. Using ethtool, we can determine the peer index of the corresponding interface on the host:
Using ip link show, the corresponding interface (peer index 23) can be found by searching for the index on the controller:
When using the LinuxBridge driver, the veth interface is connected to a bridge that corresponds to the external network:
The namespace can communicate with other devices in the same subnet through the bridge. The other interface in the bridge, eth2.50, tags traffic as VLAN 50 as it exits the bridge out the physical interface eth2.
Observe the route table within the namespace. The default gateway address corresponds to the address defined in the subnet's gateway_ip attribute. In this case, it is 10.50.0.1:
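The namespace's route table can be viewed in the same way as its interfaces:

```shell
# Display the route table inside the router namespace
ip netns exec qrouter-bc2a45f6-db3b-4ca4-a5ee-340af36a8293 ip route
```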
To test external connectivity from the Neutron router, ping the edge gateway device from within the router namespace:
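The ping is executed from inside the namespace, targeting the ASA's inside interface address:

```shell
# Ping the physical gateway (the ASA inside interface) from the router namespace
ip netns exec qrouter-bc2a45f6-db3b-4ca4-a5ee-340af36a8293 ping -c 4 10.50.0.1
```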
Successful ping attempts from the router namespace to the physical gateway device demonstrate a proper external VLAN configuration on both hardware- and software-based networking components.
Within the admin tenant, create an internal network for instances. In this demonstration, a network will be created with the following attribute:

Name: TENANT_NET

The following screenshot demonstrates the resulting output of the net-create command:
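Because the provider attributes are left to Neutron to determine, the command needs only the network name:

```shell
# Create a tenant network; type, physical network, and segmentation ID are automatic
neutron net-create TENANT_NET
```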
Note how Neutron automatically determines the network type, physical network, and segmentation ID for the network. As the net-create command was executed without specific provider attributes, Neutron relied on the configuration found in the plugin configuration file to determine the type of network to create.
The following configuration options in the ML2 configuration file were used to determine the network type, physical network, and segmentation ID:
tenant_network_types = vlan,vxlan
network_vlan_ranges = physnet2:30:33
As tenant networks are created, Neutron consumes VLAN segmentation IDs from the configured range until they are exhausted, then moves on to creating VXLAN networks.
Using the Neutron subnet-create command, create a subnet with the following attributes:

Name: TENANT_SUBNET
Network: 10.30.0.0
Subnet mask: 255.255.255.0
Gateway: <auto>
Allocation pool: <auto>
DNS nameserver: 8.8.8.8

The following screenshot demonstrates the resulting output of the subnet-create command:
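Leaving the gateway and allocation pool unspecified lets Neutron choose them automatically, matching the <auto> values above:

```shell
# Create the tenant subnet; gateway and allocation pool default automatically
neutron subnet-create TENANT_NET 10.30.0.0/24 \
  --name TENANT_SUBNET \
  --dns-nameserver 8.8.8.8
```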
Using the Neutron router-interface-add command, attach the TENANT_SUBNET subnet to MyRouter:

(neutron) router-interface-add MyRouter TENANT_SUBNET
Added interface 24b55e34-227f-4fe4-b341-35ff2f49a099 to router MyRouter.
Using the Neutron router-port-list command, determine the internal IP of the router:
When a particular port ID is not specified while using the router-interface-add command, the IP address assigned to the internal router interface defaults to the address set in the gateway_ip attribute of the subnet.
Inside the router namespace, a new interface has been added with the prefix qr. A qr interface is an internal interface of the router:
When using the Open vSwitch driver, the interface is connected directly to the integration bridge. When using the LinuxBridge driver, as in this example, every qr interface is one end of a veth pair whose other end is connected to a bridge on the host that corresponds to the tenant network:
The router namespace can communicate with other devices in the same subnet through the bridge. The eth2.30 interface in the bridge tags traffic as VLAN 30 as it exits the bridge out the eth2 parent interface.
Create two instances with the following characteristics:

Names: MyInstance-1, MyInstance-2
Network: TENANT_NET
Image: CirrOS
Flavor: m1.tiny
Use the glance image-list command to determine the ID of the CirrOS image downloaded in Chapter 2, Installing OpenStack:
Use the following nova boot command to boot two instances in the TENANT_NET network:
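The boot commands take the IDs gathered above; <IMAGE_ID> and <NET_ID> are placeholders for the values returned by glance image-list and neutron net-list:

```shell
# Boot two CirrOS instances attached to the TENANT_NET network
nova boot --image <IMAGE_ID> --flavor m1.tiny --nic net-id=<NET_ID> MyInstance-1
nova boot --image <IMAGE_ID> --flavor m1.tiny --nic net-id=<NET_ID> MyInstance-2
```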
The nova list command can be used to return a list of instances and their IP addresses:
On one or both of the compute nodes, depending on where the instances were scheduled, a Linux bridge is created that corresponds to the TENANT_NET network. Connected to the bridge, we can find a VLAN interface and one or more tap interfaces that correspond to the instances:
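The bridge membership can be confirmed on the compute node; the bridge for TENANT_NET should list the eth2.30 VLAN interface alongside the instances' tap interfaces:

```shell
# Run on the compute node hosting the instances to list bridges and their ports
brctl show
```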
When a network and subnet are created with DHCP enabled, a network namespace is created by the neutron-dhcp-agent service, which serves as a DHCP server for the network. On the host running the neutron-dhcp-agent service, ip netns can be used to reveal the namespace:
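DHCP namespaces are prefixed with qdhcp, so they can be filtered out of the full namespace listing:

```shell
# List only the DHCP namespaces on the host running neutron-dhcp-agent
ip netns | grep qdhcp
```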
For easy identification, the name of a DHCP namespace corresponds to the ID of the network it is serving. Inside the namespace, an interface with the prefix ns is created and assigned an address from the allocation pool of the subnet:
When using the Open vSwitch driver, the ns interface is connected directly to the integration bridge. When using the LinuxBridge driver, as in this example, the ns interface is one end of a veth pair whose other end is connected to a bridge on the host that corresponds to the network. The namespace can communicate with other devices in the same subnet through this bridge.
As the instances came online, they sent a DHCP request that was served by the dnsmasq process in the DHCP namespace. A populated ARP table within the namespace confirms that the instances are functioning in the VLAN on layer 2:
A populated ARP table can only be used to verify connectivity when the l2population driver is not in use. The l2population driver prepopulates the ARP and forwarding tables to reduce overhead on overlay networks, so it may not provide an accurate picture of connectivity in those networks.
Before you can connect to the instances, security group rules must be updated to allow ICMP and SSH. Chapter 6, Managing Security Groups, focuses on the implementation and administration of security group rules in more detail. To test connectivity, add ICMP and SSH access to a security group applied to the instances. Use the following command to determine the security group for this particular instance:
# neutron port-list --device-id=<instance id> -c security_groups
The resulting output can be seen in the following screenshot:
Use the Neutron security-group-rule-create command to create rules within the respective security group, as follows:
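Assuming the instances use the default security group, as reported by the port-list command above, rules along these lines would permit ICMP and SSH:

```shell
# Allow inbound ICMP to instances in the 'default' security group (assumed name)
neutron security-group-rule-create --protocol icmp --direction ingress default

# Allow inbound SSH (TCP port 22)
neutron security-group-rule-create --protocol tcp \
  --port-range-min 22 --port-range-max 22 --direction ingress default
```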
Using an SSH client, connect to an instance from either the router or DHCP namespace. The CirrOS image has a built-in user named cirros with the password cubswin:):
Observe the routing table of the instance. The default gateway of the instance is the Neutron router created earlier in this chapter. Pinging an external resource from an instance should be successful provided external connectivity from the Neutron router exists:
The default behavior of the Neutron router is to source NAT traffic from instances that lack floating IPs when traffic egresses the external, or gateway, interface of the router. From the eth2.30 interface of the controller node, we can observe the ICMP traffic from the instances, sourcing from the real address, 10.30.0.3, as it heads toward the router:
From the eth2.50 interface on the controller node, we can observe the ICMP traffic from the instances after it traverses the router, now sourcing from the router's address, 10.50.0.100:
A look at the iptables chains within the router namespace reveals the NAT rules responsible for this behavior:
In this configuration, instances can communicate with outside resources through the router as long as the instances initiate the connection. Outside resources cannot initiate connections directly to instances via their fixed IP address.
To initiate connections to instances behind Neutron routers from outside networks, you must configure a floating IP address and associate it with the instance. With Neutron, a floating IP is associated with the Neutron port that corresponds to the interface of the instance accepting connections.
Using the Neutron port-list command, determine the port ID of each instance that has been recently booted. The port-list command allows results to be filtered by device or instance ID, as shown in the following screenshot:
Using the Neutron floatingip-create command, create a single floating IP address and associate it with the port of the instance known as MyInstance-1:
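The command takes the external network and, optionally, a port to associate the floating IP with in one step; <PORT_ID> is a placeholder for the port of MyInstance-1 found above:

```shell
# Allocate a floating IP from GATEWAY_NET and bind it to the instance's port
neutron floatingip-create GATEWAY_NET --port-id <PORT_ID>
```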
From within the guest OS, verify that the instance can still communicate with outside resources:
From the eth2.50 interface on the controller node, we can observe the ICMP traffic from the instances after it traverses the router, now sourcing from the floating IP address, 10.50.0.101:
Within the router namespace, the floating IP is configured as a secondary address on the qg interface:
When the floating IP is configured as a secondary network address on the qg interface, the router can respond to ARP requests for the floating IP from the upstream gateway device and other Neutron routers or devices in the same external network.
A look at the iptables chains within the router namespace shows that rules have been added to perform the 1:1 NAT translation from the floating IP to the fixed IP of MyInstance-1 and vice versa:
Provided our client workstation can route to the external provider network, traffic can be initiated directly to the instance via the floating IP:
Neutron provides the ability to quickly disassociate a floating IP from an instance or other network resource and associate it with another.
A listing of floating IPs shows the current association:
Using the Neutron floatingip-disassociate and floatingip-associate commands, disassociate the floating IP from MyInstance-1 and associate it with MyInstance-2. This disassociation can be seen in the following screenshot:
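Both commands operate on the floating IP's ID; <FLOATINGIP_ID> and <PORT_ID> are placeholders for the floating IP ID and the port of MyInstance-2 found earlier:

```shell
# Detach the floating IP from MyInstance-1
neutron floatingip-disassociate <FLOATINGIP_ID>

# Attach it to the port of MyInstance-2
neutron floatingip-associate <FLOATINGIP_ID> <PORT_ID>
```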
The output of floatingip-list shows that the floating IP is no longer associated with a port:
Using the Neutron floatingip-associate command, associate the floating IP with the port of MyInstance-2, as shown in the following screenshot:
Observe the iptables rules within the router namespace. The NAT relationship has been modified, and traffic from MyInstance-2 will now appear as the floating IP:
As a result of the new association, attempting an SSH connection to the floating IP may result in the following message on the client machine:
The preceding message is a good indicator that traffic is being sent to a different host. Clearing the offending key and logging in to the instance reveals it to be MyInstance-2:
At this point, we have successfully deployed two instances behind a single virtual router and verified the connectivity to and from the instances using floating IPs. In the next section, we will explore how these same tasks can be accomplished within the Horizon dashboard.