Chapter 4. Building a Virtual Switching Infrastructure

One of the core functions of OpenStack Networking is to provide connectivity to and from instances by programmatically configuring the network infrastructure of the cloud.

In the last chapter, we installed various Neutron services and the ML2 plugin across all nodes in the cloud. In this chapter, you will be introduced to networking concepts and architectures that Neutron relies on to provide connectivity to instances as well as multiple mechanism drivers that extend the functionality of the ML2 network plugin: the LinuxBridge, Open vSwitch, and L2 population drivers. You will be guided through the installation and configuration of the drivers and their respective agents, and we will lay a foundation for the creation of networks and instances in the chapters to come.

Virtual network devices

OpenStack is responsible for configuring and managing many different types of virtual and physical network devices and technologies across the cloud infrastructure.

Virtual network interfaces

OpenStack uses the libvirt KVM/QEMU driver to provide platform virtualization in default Nova configurations. When an instance is booted for the first time, Neutron assigns a virtual port to each network interface of the instance. KVM creates a virtual network interface called a tap interface on the compute node hosting the instance. The tap interface corresponds directly to a network interface within the guest instance. Through the use of a bridge, the host can expose the guest instance to a physical network.

Tip

In OpenStack, the name of the tap interface associated with an instance corresponds to the UUID, or unique identifier, of the Neutron port that the instance is plugged into.
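For example, if a Neutron port's UUID were to begin with 9a1c3e5b, the corresponding tap interface on the compute node would carry a matching prefix in its name. The UUID and interface details below are purely hypothetical:

# ip link show | grep tap9a1c3e5b
12: tap9a1c3e5b-6f: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP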

Virtual network switches

Neutron supports many types of virtual and physical switches and includes built-in support for Linux bridges and Open vSwitch virtual switches.

Note

The terms bridge and switch are often used interchangeably in the context of Neutron and may be used in the same way throughout this book.

A Linux bridge is a virtual switch on a host that connects multiple network interfaces. When using Neutron, a bridge usually connects a physical interface to one or more virtual or tap interfaces. Physical interfaces include Ethernet interfaces, such as eth1, and bonded interfaces, such as bond0. Virtual interfaces include VLAN interfaces, such as eth1.100, as well as the tap interfaces created by KVM. You can connect multiple physical or virtual network interfaces to a Linux bridge.
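The following commands illustrate how such a bridge could be assembled by hand on a test host outside of OpenStack's control. Neutron performs equivalent steps automatically, and the bridge and interface names here are placeholders only:

# brctl addbr br-demo
# brctl addif br-demo eth1
# ip link set dev br-demo up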

The following diagram provides a high-level view of a Linux bridge leveraged by Neutron:

Figure 4.1

In Figure 4.1, the Linux bridge, brqXXXX, is connected to a single physical interface, eth1, and three virtual interfaces, tap0, tap1, and tap2. Each tap interface corresponds to a network interface within its respective guest instance. Traffic from eth0 within an instance can be observed on the corresponding tap interface on the host, as well as on the bridge and the physical interface connected to the bridge.
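On a compute node, a bridge and its member interfaces can be inspected with the brctl show command. The output below is only a representative sketch; the bridge name, bridge ID, and interface names will differ in a real environment:

# brctl show
bridge name     bridge id               STP enabled     interfaces
brqXXXX         8000.001122334455       no              eth1
                                                        tap0
                                                        tap1
                                                        tap2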

Open vSwitch operates as a software-based switch that uses virtual network bridges and flow rules to forward packets between hosts. Most Neutron setups that leverage Open vSwitch utilize at least three virtual switches or bridges, including a provider, integration, and tunnel bridge. These virtual switches are cross connected with one another, similar to how a physical switch may be connected to another physical switch with a cross connect cable.
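When the Open vSwitch driver is in use, the bridges on a host can be listed with the ovs-vsctl utility. The bridge names below reflect common Neutron defaults, such as br-int for the integration bridge and br-tun for the tunnel bridge; the name of the provider bridge varies with the configuration:

# ovs-vsctl list-br
br-eth1
br-int
br-tun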

Note

When the wrong combination of interfaces is connected to a bridge, a bridging loop may occur and disrupt the network. To avoid these issues, do not manually modify the bridges managed by OpenStack.

How Linux bridges and Open vSwitch bridges connect instances to the network is covered in more detail later in this chapter.

Configuring the bridge interface

In this installation, the eth2 physical network interface will be utilized for bridging purposes. On the controller and compute nodes, configure the eth2 interface within the /etc/network/interfaces file, as follows:

auto eth2
iface eth2 inet manual

Save and close the file, and then bring the interface up with the following command:

# ip link set dev eth2 up

Confirm that the interface is in the UP state using the ip link show dev eth2 command, as shown in Figure 4.2:

Figure 4.2
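Representative output resembles the following; the interface index, MAC address, and queueing discipline will vary from host to host, but the state should read UP:

# ip link show dev eth2
3: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
    link/ether 52:54:00:12:34:56 brd ff:ff:ff:ff:ff:ff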

If the interface is up, it is ready for use in a Linux or Open vSwitch bridge.

Note

As the interface will be used in a bridge, an IP address cannot be applied directly to it. If an IP address is applied to eth2, it will become unreachable once the interface is placed in a bridge. If you need connectivity to this interface, apply the IP address to the bridge instead.

Overlay networks

Neutron supports overlay networking technologies that allow virtual networks to scale across the cloud with little to no change in the underlying physical infrastructure. To accomplish this, Neutron leverages L2-in-L3 overlay networking technologies, such as GRE and VXLAN. When configured accordingly, Neutron builds point-to-point tunnels between all network and compute nodes in the cloud using the management or another dedicated interface. These point-to-point tunnels create what is called a mesh network, where every host is connected to every other host. A cloud consisting of one controller running network services and three compute nodes will have a fully meshed overlay network that resembles Figure 4.3:

Figure 4.3
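Because every tunnel endpoint peers with every other endpoint, the number of point-to-point tunnels grows quickly as hosts are added. A full mesh of N endpoints requires N x (N - 1) / 2 tunnels:

One controller/network node plus three compute nodes:  N = 4
Tunnels required:  4 x (4 - 1) / 2 = 6 point-to-point tunnels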

Using the ML2 plugin, GRE- and VXLAN-based networks can be created by users at scale without any changes to the underlying switching infrastructure. When a GRE- or VXLAN-based network is created, a unique segmentation ID is specified and is carried in the encapsulation header to identify the network. Every packet sent between instances on different hosts is encapsulated on one host and sent to the other through a point-to-point GRE or VXLAN tunnel. When the packet reaches the destination host, the tunnel-related headers are stripped and the packet is forwarded through the connected bridge to the instance.

The following diagram shows a packet encapsulated by a host:

Figure 4.4

In Figure 4.4, the outer IP header source and destination addresses identify the endpoints of the tunnel. Tunnel endpoints include compute nodes and any node running the L3 and DHCP services, such as a network node. The inner IP header source and destination addresses identify the original sender and recipient of the payload.
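As a point of reference, the headers of a VXLAN-encapsulated packet are stacked as follows; a GRE-encapsulated packet is similar, with a GRE header taking the place of the UDP and VXLAN headers:

Outer Ethernet header
Outer IP header       (source/destination: the tunnel endpoints)
Outer UDP header
VXLAN header          (carries the VXLAN Network Identifier, or VNI)
Inner Ethernet header
Inner IP header       (source/destination: the original sender and recipient)
Payload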

Because GRE and VXLAN network traffic is encapsulated between hosts, many physical network devices cannot participate in these networks. As a result, GRE and VXLAN networks are effectively isolated from other networks in the cloud without the use of a Neutron router. More information on creating Neutron routers can be found in Chapter 7, Creating Standalone Routers with Neutron.

Connectivity issues when using overlay networks

One thing to be aware of when using overlay networking technologies is that the additional headers added to each packet may cause the packet to exceed the maximum transmission unit, or MTU. The MTU is the largest packet or frame size that can be sent over a network. Encapsulating a packet with VXLAN headers may push its size over the default maximum of 1500 bytes. Connection issues caused by exceeding the MTU manifest themselves in strange ways: SSH connections to instances may hang or fail partway through, and large payloads may fail to transfer between instances, among other symptoms. To account for the overhead of VXLAN encapsulation and avoid these issues, consider lowering the MTU of interfaces within virtual machine instances from 1500 to 1450 bytes.
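The 1450-byte recommendation follows from the roughly 50 bytes of overhead that VXLAN encapsulation adds to each frame on an IPv4 underlay:

Outer Ethernet header:  14 bytes
Outer IPv4 header:      20 bytes
Outer UDP header:        8 bytes
VXLAN header:            8 bytes
Total overhead:         50 bytes  ->  1500 - 50 = 1450 bytes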

The DHCP agent and dnsmasq can be configured to push a lower MTU to instances within the DHCP lease offer. To configure a lower MTU, complete the following steps:

  1. On the controller node, modify the DHCP configuration file at /etc/neutron/dhcp_agent.ini and specify a custom dnsmasq configuration file, as follows:
    [DEFAULT]
    dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf
  2. Next, create the custom dnsmasq configuration file at /etc/neutron/dnsmasq-neutron.conf and add the following contents:
    dhcp-option-force=26,1450
  3. Save and close the file. Restart the Neutron DHCP agent with the following command:
    # service neutron-dhcp-agent restart
    

When instances are created later in this book, the lowered MTU can be observed within the instance using the ip link show <interface> command.
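Inside an instance that has received its address from the modified DHCP agent, the lowered MTU might appear as follows; the interface name, index, and MAC address shown are examples only:

# ip link show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
    link/ether fa:16:3e:ab:cd:ef brd ff:ff:ff:ff:ff:ff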
