The Open vSwitch mechanism driver is included with the ML2 plugin and was installed in Chapter 3, Installing Neutron. The following sections will walk you through the configuration of Neutron and Nova to utilize the Open vSwitch driver and agent.
To install the Open vSwitch agent, issue the following command on all nodes:
# apt-get install neutron-plugin-openvswitch-agent
Dependencies, such as the Open vSwitch components openvswitch-common and openvswitch-switch, will be installed. If prompted to overwrite the neutron.conf file, type N at the [default=N] prompt.
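To confirm that the Open vSwitch packages were pulled in and that the switch daemon is running, the following optional checks can be performed; the exact package versions reported will vary with your distribution:
# dpkg -l | grep openvswitch
# service openvswitch-switch status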
For Nova to properly connect instances to the network when using the Open vSwitch driver, the linuxnet_interface_driver configuration option in /etc/nova/nova.conf must be modified.
Update the linuxnet_interface_driver configuration option in the Nova configuration file at /etc/nova/nova.conf on all hosts to use the OVS interface driver:
[DEFAULT]
...
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
To properly connect the DHCP namespace tap interfaces to the integration bridge, the DHCP agent must be configured to use the Open vSwitch interface driver.
Update the interface_driver configuration option in the Neutron DHCP agent configuration file at /etc/neutron/dhcp_agent.ini on the controller node to use the OVS interface driver:
[DEFAULT]
...
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
Prior to ML2, the Open vSwitch plugin used its own configuration file and options. The [ovs] and [agent] option blocks have moved to the ML2 configuration file, and the most common options are as follows:
[ovs]
bridge_mappings
enable_tunneling
tunnel_type
integration_bridge
tunnel_bridge
local_ip

[agent]
tunnel_types
The bridge_mappings configuration option describes the mapping of an artificial interface name or label to a network bridge configured on the server. Unlike the LinuxBridge plugin, which configures multiple bridges containing individual VLAN or VXLAN interfaces, the Open vSwitch plugin uses a single bridge interface containing a single physical interface, along with flow rules to add, modify, or remove VLAN headers as necessary.
When networks are created, they are associated with an interface label, such as physnet1. The physnet1 label is then mapped to a bridge, such as br-eth1, which contains the eth1 physical interface. The mapping of the label to the bridge interface is handled by the bridge_mappings option. This mapping can be observed as follows:
bridge_mappings = physnet1:br-eth1
The label itself must be consistent between all nodes in the environment. However, the bridge interface mapped to the label, as well as the interface in the bridge itself, may be different. A difference in mappings is often observed when one node maps physnet1 to a bridge interface capable of 1 gigabit and another maps physnet1 to a bridge interface capable of 10 gigabits.
More than one interface mapping is allowed and can be added to the list using a comma as a separator, as seen in the following example:
bridge_mappings = physnet1:br-eth1,physnet2:br-eth2
In this installation, physnet2 will be used as the interface label and will be mapped to the br-eth2 bridge. Update the ML2 configuration file at /etc/neutron/plugins/ml2/ml2_conf.ini accordingly on all hosts:
[ovs]
...
bridge_mappings = physnet2:br-eth2
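The bridge named in bridge_mappings must exist on each host for the mapping to work. As a minimal sketch, assuming eth2 is the physical interface dedicated to this traffic in your environment, the br-eth2 bridge can be created and the interface added to it with ovs-vsctl:
# ovs-vsctl add-br br-eth2
# ovs-vsctl add-port br-eth2 eth2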
To enable support for GRE and VXLAN, the enable_tunneling configuration option must be set to true. Open vSwitch versions newer than Version 1.10 should support both technologies. To determine the version of Open vSwitch you have installed, run ovs-vsctl -V as follows:
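The command prints the installed Open vSwitch version. The version shown in the following sample output is only an example and will differ based on your installation:
# ovs-vsctl -V
ovs-vsctl (Open vSwitch) 2.0.2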
For better performance and reliability, Open vSwitch 2.3 or higher is recommended. For more information on how to download and install the latest Open vSwitch release, visit www.openvswitch.org.
To enable GRE and VXLAN tunneling support, update the enable_tunneling configuration option in the [ovs] section of the ML2 configuration file on all hosts:
[ovs]
...
enable_tunneling = true
The tunnel_type configuration option specifies the type of tunnel to build when tunneling is enabled. The two available options are gre and vxlan.
To enable only GRE tunnels, set tunnel_type to gre in the [ovs] section of the ML2 configuration file:
[ovs]
...
tunnel_type = gre
To enable only VXLAN tunnels, set tunnel_type to vxlan in the [ovs] section of the ML2 configuration file:
[ovs]
...
tunnel_type = vxlan
To enable both GRE and VXLAN tunnels, specify both tunnel types separated by a comma:
[ovs]
...
tunnel_type = vxlan,gre
The integration_bridge configuration option specifies the name of the integration bridge used on each node. There is a single integration bridge per node that acts as the virtual switch where all virtual machine VIFs, otherwise known as virtual network interfaces, are connected. The default name of the integration bridge is br-int and should not be modified.
Starting with the Icehouse release of OpenStack, the Open vSwitch agent automatically creates the integration bridge the first time the agent service is started. You do not need to add an interface to the integration bridge as Neutron is responsible for connecting network devices to this virtual switch.
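Once the agent has been started for the first time, the presence of the integration bridge can be confirmed with ovs-vsctl. This is an optional check rather than a required step, and the output may include other bridges as well:
# ovs-vsctl list-br
br-int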
The tunnel bridge is a virtual switch, similar to the integration and provider bridges, and is used to connect the GRE and VXLAN tunnel endpoints. Flow rules on this bridge are responsible for properly encapsulating and decapsulating tenant traffic as it traverses the bridge.
The tunnel_bridge configuration option specifies the name of the tunnel bridge. The default value is br-tun and should not be modified. It is not necessary to create this bridge manually, as Neutron does it automatically.
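Once the agent is running with tunneling enabled, the flow rules mentioned above can be inspected on the tunnel bridge with ovs-ofctl. The output is specific to your environment and is shown here only as an inspection technique, not a required step:
# ovs-ofctl dump-flows br-tun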
The local_ip configuration option specifies the local IP address on the node that will be used to build the GRE or VXLAN overlay network between hosts when enable_tunneling is set to true. Refer to Chapter 1, Preparing the Network for OpenStack, for ideas on how the overlay network should be architected. In this installation, all guest traffic through overlay networks will traverse a dedicated VLAN over the eth1 interface configured in Chapter 2, Installing OpenStack.
The following table provides the interfaces and addresses to be configured on each host:
Hostname | Interface | IP address
---|---|---
controller | eth1 | 172.18.0.100
compute01 | eth1 | 172.18.0.101
compute02 | eth1 | 172.18.0.102
Update the local_ip configuration option in the [ovs] section of the ML2 configuration file accordingly on all hosts.
On the controller node:
[ovs]
...
local_ip = 172.18.0.100
On compute01:
[ovs]
...
local_ip = 172.18.0.101
On compute02:
[ovs]
...
local_ip = 172.18.0.102
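Before moving on, it may be worth confirming that the hosts can reach one another at their configured local_ip addresses, since the overlay tunnels are built between these endpoints. A simple check from the controller node, assuming the addresses above, is as follows:
# ping -c 3 172.18.0.101
# ping -c 3 172.18.0.102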
The tunnel_types configuration option specifies the types of tunnels supported by the agent. The two available options are gre and vxlan. If left unconfigured, the default value is gre when enable_tunneling is set to true. If you are using only vxlan, set this option to vxlan.
Update the tunnel_types configuration option in the [agent] section of the ML2 configuration file accordingly on all hosts:
[agent]
tunnel_types = vxlan,gre
Now that the appropriate OpenStack configuration files have been modified to use Open vSwitch as the networking driver, certain services must be started or restarted for the changes to take effect.
The Open vSwitch network agent should be restarted on all nodes:
# service neutron-plugin-openvswitch-agent restart
The following services should be restarted on the controller node:
# service nova-api restart
# service neutron-server restart
# service neutron-dhcp-agent restart
The following service should be restarted on the compute nodes:
# service nova-compute restart
To verify that the Open vSwitch network agents on all nodes have properly checked in, issue the neutron agent-list command on the controller node:
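The output resembles the following; the agent IDs are truncated placeholders here and the hostnames will reflect your environment:
+-----+--------------------+------------+-------+----------------+
| id  | agent_type         | host       | alive | admin_state_up |
+-----+--------------------+------------+-------+----------------+
| ... | Open vSwitch agent | controller | :-)   | True           |
| ... | Open vSwitch agent | compute01  | :-)   | True           |
| ... | Open vSwitch agent | compute02  | :-)   | True           |
+-----+--------------------+------------+-------+----------------+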
The Open vSwitch agents on the controller and compute nodes should be visible in the output with a smiley face under the alive column. If a node is not present or the status is XXX, troubleshoot agent connectivity issues by observing the log messages found in /var/log/neutron/openvswitch-agent.log on the respective host.
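To watch the agent log in real time while troubleshooting, the log file can be tailed on the node in question:
# tail -f /var/log/neutron/openvswitch-agent.log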