Choosing a plugin and driver

Neutron networking plugins, drivers, and agents are responsible for implementing features that provide network connectivity to and from instances. The ML2 plugin can leverage multiple layer 2 technologies simultaneously through the use of mechanism drivers. The two drivers discussed in this book, LinuxBridge and Open vSwitch, implement network connectivity in different ways.

Using the LinuxBridge driver

When configured to utilize the ML2 plugin and LinuxBridge driver, Neutron relies on the bridge, 8021q, and vxlan kernel modules to properly connect instances and other network resources to the virtual switch and forward the traffic. The LinuxBridge driver is popular for its dependability and ease of troubleshooting but lacks support for some advanced Neutron features, such as distributed virtual routers.

In a LinuxBridge-based network implementation, Neutron manages five types of interfaces:

  • Tap interfaces
  • Physical interfaces
  • VLAN interfaces
  • VXLAN interfaces
  • Linux bridges

A tap interface is created and used by a hypervisor, such as QEMU/KVM, to connect the guest operating system in a virtual machine instance to the host. These virtual interfaces on the host correspond to a network interface inside the guest instance. An Ethernet frame sent to the tap device on the host is received by the guest operating system, and frames sent by the guest operating system are injected into the host network stack.
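
While Neutron and the hypervisor create tap interfaces automatically, one can be created by hand with iproute2 for demonstration purposes. The name tap0 below is hypothetical; Neutron generates names such as tapXXXXXXXX-XX on its own:

    # Create and activate a tap interface (tap0 is a hypothetical name)
    ip tuntap add dev tap0 mode tap
    ip link set dev tap0 up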

A physical interface represents an interface on the host that is plugged into physical network hardware. Physical interfaces are often labeled eth0, eth1, em0, em1, and so on, and vary depending on the host operating system.

Linux supports 802.1q VLAN tagging through the use of virtual VLAN interfaces. A VLAN interface can be created using iproute2 commands or the traditional vlan utility and 8021q kernel module. A VLAN interface is often labeled ethX.<vlan> and is associated with its respective physical interface, ethX.
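
As a minimal sketch, assuming a physical interface named eth0 and VLAN ID 100, the iproute2 method looks like this:

    # Create a VLAN interface for VLAN 100 on top of eth0 and bring it up
    ip link add link eth0 name eth0.100 type vlan id 100
    ip link set dev eth0.100 up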

A VXLAN interface is a virtual interface that encapsulates and forwards traffic based on parameters configured when the interface is created, such as a VXLAN Network Identifier (VNI) and a VXLAN Tunnel End Point (VTEP). The function of a VTEP is to encapsulate virtual machine instance traffic within UDP/IP headers and forward it across an IP network. Traffic is segregated from other VXLAN traffic by the VNI. The instances themselves are unaware of the outer network topology providing connectivity between VTEPs.
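
The following sketch creates such an interface with iproute2; the VNI, local VTEP address, multicast group, and parent interface are example values only:

    # Create a VXLAN interface with VNI 100; 10.0.0.10 is the local VTEP
    # address and 239.1.1.1 is the multicast group for unknown traffic
    ip link add vxlan-100 type vxlan id 100 local 10.0.0.10 \
        group 239.1.1.1 dev eth0 dstport 4789
    ip link set dev vxlan-100 up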

A Linux bridge is a virtual interface that connects multiple network interfaces. In Neutron, a bridge usually includes a physical interface and one or more virtual or tap interfaces. A Linux bridge is a form of virtual switch.
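
For illustration, a bridge can be created and populated with iproute2; the bridge name below is hypothetical, whereas Neutron generates names such as brqXXXXXXXX-XX:

    # Create a bridge, attach a VLAN interface and a tap interface to it,
    # and bring the bridge up
    ip link add name br-example type bridge
    ip link set dev eth0.100 master br-example
    ip link set dev tap0 master br-example
    ip link set dev br-example up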

Using the Open vSwitch driver

Within OpenStack Networking, Open vSwitch operates as a software-based switch that uses virtual network bridges and flow rules to forward packets between hosts. Although it is capable of supporting many technologies and protocols, only a subset of Open vSwitch features are leveraged by Neutron.

There are three main components of Open vSwitch to be aware of:

  • The kernel module: The Open vSwitch kernel module is the equivalent of ASICs on a hardware switch. It is the data plane of the switch where all packet processing takes place.
  • The vSwitch daemon: The Open vSwitch daemon, ovs-vswitchd, is a Linux process that runs in user space on every physical host and dictates how the kernel module will be programmed.
  • The database server: Open vSwitch uses a local database on every physical host called Open vSwitch Database Server (OVSDB), which maintains the configuration of virtual switches.

When configured to utilize the Open vSwitch mechanism driver, Neutron relies on the bridge and openvswitch kernel modules along with user-space utilities, such as ovs-vsctl and ovs-ofctl, to properly manage the Open vSwitch database and connect instances and other network resources to virtual switches.
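
As a brief illustration of those utilities, the following commands query the OVSDB for the current switch configuration and dump the flow rules of a bridge; br-int is the conventional name of the Neutron integration bridge:

    # Show the virtual switch configuration stored in OVSDB
    ovs-vsctl show

    # Dump the flow rules programmed on the integration bridge
    ovs-ofctl dump-flows br-int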

In an Open vSwitch-based network implementation, there are five distinct types of virtual networking devices:

  • Tap devices
  • Linux bridges
  • Virtual Ethernet cables
  • OVS bridges
  • OVS patch ports

Tap devices and Linux bridges were described briefly in the previous section, and their use in an Open vSwitch-based network remains the same. Virtual Ethernet, or veth, cables are virtual interfaces that mimic network patch cables. An Ethernet frame sent to one end of a veth cable is received by the other end, much like a real network patch cable. Neutron also makes use of veth cables to make connections between various network resources, including namespaces and bridges.
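
A veth pair can be created with iproute2, as in this sketch using hypothetical interface names:

    # Create a veth pair; a frame entering veth0 exits veth1, and vice versa
    ip link add veth0 type veth peer name veth1
    ip link set dev veth0 up
    ip link set dev veth1 up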

An OVS bridge behaves like a physical switch, only virtualized. Neutron connects the interfaces used by DHCP or router namespaces and instance tap interfaces to OVS bridge ports. The ports themselves can be configured much like a physical switch port. Open vSwitch maintains information about connected devices, including MAC addresses and interface statistics.
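
A minimal sketch of creating an OVS bridge and inspecting it follows; the bridge and interface names are hypothetical:

    # Create an OVS bridge and attach a tap interface to it
    ovs-vsctl add-br br-example
    ovs-vsctl add-port br-example tap0

    # List the bridge ports and show the MAC addresses the bridge has learned
    ovs-vsctl list-ports br-example
    ovs-appctl fdb/show br-example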

Open vSwitch has a built-in port type that mimics the behavior of a Linux veth cable, but it is optimized for use with OVS bridges. When connecting two Open vSwitch bridges, a port on each switch is reserved as a patch port. Patch ports are configured with a peer name that corresponds to the patch port on the other switch. Graphically, it looks similar to this:


Figure 4.5

In Figure 4.5, two OVS bridges are cross connected via a patch port on each switch.

Open vSwitch patch ports are used to connect Open vSwitch bridges to each other, while Linux veth interfaces are used to connect Open vSwitch bridges to Linux bridges or Linux bridges to other Linux bridges.
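
Cross connecting two bridges with patch ports, as in Figure 4.5, can be sketched with ovs-vsctl; all names here are hypothetical:

    # Create a patch port on each bridge and point it at its peer
    ovs-vsctl add-port br-one patch-to-two \
        -- set interface patch-to-two type=patch options:peer=patch-to-one
    ovs-vsctl add-port br-two patch-to-one \
        -- set interface patch-to-one type=patch options:peer=patch-to-two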

Using the L2 population driver

The L2 population driver was introduced in the Havana release alongside the ML2 plugin. It enables broadcast, multicast, and unicast traffic to scale out on large overlay networks.

The L2 population driver prepopulates bridge forwarding tables on all hosts to eliminate normal switch learning behavior, since broadcasts through an overlay network are costly operations due to encapsulation. Because Neutron is the source of truth for the logical layout of networks and the instances created by tenants, it can easily prepopulate forwarding tables that map MAC addresses to destination hosts. The L2 population driver also implements an ARP proxy on each host, eliminating the need to broadcast ARP requests across the overlay network. Each node is able to intercept an ARP request from an instance or router and proxy the response to the requestor.

Note

When using overlay networks, it is highly recommended to configure the L2 population mechanism driver along with the LinuxBridge or Open vSwitch driver.
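
As a sketch, the driver is enabled alongside the chosen mechanism driver in the [ml2] section of the ML2 configuration file, commonly found at /etc/neutron/plugins/ml2/ml2_conf.ini:

    [ml2]
    mechanism_drivers = linuxbridge,l2population

Depending on the agent in use, a corresponding l2_population option may also need to be enabled in the agent's own configuration file.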
