The LinuxBridge mechanism driver is included with the ML2 plugin and was installed in Chapter 3, Installing Neutron. The following sections will walk you through the configuration of Neutron and Nova to utilize the LinuxBridge driver and agent.
To install the LinuxBridge agent, issue the following command on all nodes:
# apt-get install neutron-plugin-linuxbridge-agent
If prompted to overwrite the neutron.conf file, type N at the [default=N] prompt.
In order to properly connect instances to the network, Nova Compute must be aware that LinuxBridge is the networking plugin. The linuxnet_interface_driver configuration option in /etc/nova/nova.conf instructs Nova Compute on how to connect instances to the network.
Update the linuxnet_interface_driver configuration option in the Nova configuration file at /etc/nova/nova.conf on all hosts to use the LinuxBridge interface driver:
[DEFAULT]
...
linuxnet_interface_driver = nova.network.linux_net.LinuxBridgeInterfaceDriver
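This edit can also be made non-interactively. The following sketch demonstrates one way to do it with GNU sed; it runs against a throwaway copy so it is safe to try anywhere, and the sample debug option exists only to give the [DEFAULT] section some content. On a real host, CONF would point at /etc/nova/nova.conf instead.

```shell
# Demonstration on a disposable copy; point CONF at /etc/nova/nova.conf on a
# real host. GNU sed's "a" command appends the driver line directly after the
# [DEFAULT] section header.
CONF=$(mktemp)
printf '[DEFAULT]\ndebug = False\n' > "$CONF"
sed -i '/^\[DEFAULT\]/a linuxnet_interface_driver = nova.network.linux_net.LinuxBridgeInterfaceDriver' "$CONF"
grep '^linuxnet_interface_driver' "$CONF"
rm -f "$CONF"
```

The same pattern works for the other INI edits in this chapter; only the path, section header, and option line change.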
For Neutron to properly connect DHCP namespace interfaces to the appropriate network bridge, the DHCP agent must be configured to use the LinuxBridge interface driver.
Update the interface_driver configuration option in the Neutron DHCP agent configuration file at /etc/neutron/dhcp_agent.ini on the controller node to use the LinuxBridge interface driver:
[DEFAULT]
...
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
Additional DHCP agent configuration options can be found in the previous chapter.
Prior to ML2, the LinuxBridge plugin used its own configuration file and options. The [linux_bridge] and [vxlan] option blocks were moved to the ML2 configuration file, and the most common options can be seen in the following code:
[linux_bridge]
...
physical_interface_mappings

[vxlan]
...
enable_vxlan
l2_population
local_ip
The physical_interface_mappings configuration option describes the mapping of an artificial interface name, or label, to a physical interface in the server. When networks are created, they are associated with an interface label, such as physnet2. The physnet2 label is then mapped to a physical interface, such as eth2, by the physical_interface_mappings option. This mapping can be observed as follows:
physical_interface_mappings = physnet2:eth2
The chosen label must be consistent between all nodes in the environment. However, the physical interface mapped to the label may be different from one node to the next. A difference in mappings is often observed when one node maps physnet2 to a 1-Gbit interface and another maps physnet2 to a 10-Gbit interface.
More than one interface mapping is allowed, and they can be added to the list using a comma as the separator:
physical_interface_mappings = physnet1:eth1,physnet2:eth2
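To illustrate how a multi-entry value decomposes, the following shell sketch performs the same comma-and-colon split that the agent applies internally; the echo output is purely illustrative and not something the agent itself prints.

```shell
# Split a physical_interface_mappings value into its label/interface pairs.
# Entries are comma-separated; each entry is a label:interface pair.
MAPPINGS="physnet1:eth1,physnet2:eth2"
echo "$MAPPINGS" | tr ',' '\n' | while IFS=: read -r label iface; do
    echo "label=$label -> interface=$iface"
done
```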
In this installation, the eth2 interface will be utilized as the physical network interface, which means that any VLAN provided for use by tenants must traverse eth2. The physical switch port connected to eth2 must support 802.1q VLAN tagging if VLAN networks are to be created by tenants.
Configure the LinuxBridge plugin to use physnet2 as the physical interface label and eth2 as the physical network interface by updating the ML2 configuration file accordingly on all hosts, as follows:
[linux_bridge]
...
physical_interface_mappings = physnet2:eth2
To enable support for VXLAN, the enable_vxlan configuration option must be set to true. Update the enable_vxlan configuration option in the [vxlan] section of the ML2 configuration file accordingly on all hosts:
[vxlan]
...
enable_vxlan = true
To enable support for the L2 population driver, the l2_population configuration option must be set to true. Update the l2_population configuration option in the [vxlan] section of the ML2 configuration file accordingly on all hosts:
[vxlan]
...
l2_population = true
The local_ip configuration option specifies the local IP address on the node that will be used to build the VXLAN overlay between hosts when enable_vxlan is set to true. Refer to Chapter 1, Preparing the Network for OpenStack, for ideas on how the overlay network should be architected. In this installation, all guest traffic through overlay networks will traverse a dedicated VLAN over the eth1 interface configured in Chapter 2, Installing OpenStack.
The following table provides the interfaces and addresses to be configured on each host:
Hostname | Interface | IP address
---|---|---
controller | eth1 | 172.18.0.100
compute01 | eth1 | 172.18.0.101
compute02 | eth1 | 172.18.0.102
Update the local_ip configuration option in the [vxlan] section of the ML2 configuration file accordingly on all hosts.
On the controller node, use the following address:
[vxlan]
...
local_ip = 172.18.0.100
On compute01, use the following address:
[vxlan]
...
local_ip = 172.18.0.101
On compute02, use the following address:
[vxlan]
...
local_ip = 172.18.0.102
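When scripting these per-host edits, the hostname-to-address mapping from the table can be expressed as a small shell function. The local_ip_for helper below is hypothetical; it simply encodes the addresses used in this installation, so adjust the case branches for your own environment.

```shell
# Hypothetical helper: return the overlay address for a given host, per the
# addressing plan used in this installation.
local_ip_for() {
    case "$1" in
        controller) echo 172.18.0.100 ;;
        compute01)  echo 172.18.0.101 ;;
        compute02)  echo 172.18.0.102 ;;
        *) echo "unknown host: $1" >&2; return 1 ;;
    esac
}

# Example: look up the address for compute01.
local_ip_for compute01
```

A provisioning script could call local_ip_for "$(hostname -s)" and write the result into the [vxlan] section on each node.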
If the OpenStack configuration files have been modified to use LinuxBridge as the networking plugin, certain services must be restarted for the changes to take effect.
The following service should be restarted on all hosts in the environment:
# service neutron-plugin-linuxbridge-agent restart
Also, the following services should be restarted on the controller node:
# service nova-api restart
# service neutron-server restart
# service neutron-dhcp-agent restart
Then, the following service should be restarted on the compute nodes:
# service nova-compute restart
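For scripted deployments, the per-role restart lists above can be captured in one place. The restarts_for function name below is an assumption; the function only prints the commands for a given role, and piping its output to sh would execute them.

```shell
# Hypothetical helper: print the restart commands for a node role. The
# LinuxBridge agent restart applies to every host; the rest depend on role.
restarts_for() {
    echo "service neutron-plugin-linuxbridge-agent restart"
    case "$1" in
        controller)
            echo "service nova-api restart"
            echo "service neutron-server restart"
            echo "service neutron-dhcp-agent restart" ;;
        compute)
            echo "service nova-compute restart" ;;
    esac
}

# Example: show what would be restarted on a compute node.
restarts_for compute
```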
To verify that the LinuxBridge network agents on all nodes have properly checked in, issue the neutron agent-list command on the controller node:

# neutron agent-list
The LinuxBridge agents on the controller and compute nodes should be visible in the output with a smiley face under the alive column. If a node is not present or its status is XXX, troubleshoot agent connectivity issues by observing the log messages found in /var/log/neutron/neutron-plugin-linuxbridge-agent.log on the respective host.