There are a number of ways to automate an installation of OpenStack. These methods predominantly make use of configuration management tools such as Chef, Puppet, and Ansible. In this recipe, we will see how to use Ansible to install OpenStack and how the Playbooks make use of LXC containers, which isolate the resources and filesystem of each service running in a container. At the time of writing, the Ansible Playbooks used for installing OpenStack are hosted on Stackforge. These will soon move under the OpenStack namespace on GitHub as an official project.
The environment that we will be using in this recipe will consist of seven physical servers:
Each server will have at least two network cards installed and will utilize VLANs (a total of four distinct networks are created for the installation). In production, it is assumed that you will have at least four network cards so that you can create two bonded pairs of interfaces and cable them to different HA switches for resilience.
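For reference, a bonded pair on Ubuntu can be declared in /etc/network/interfaces along the following lines. This is only a minimal sketch, assuming the ifenslave package is installed; bond0 and the member interfaces eth0 and eth2 are hypothetical names that you would adjust to your hardware:

# Hypothetical bonded pair used in place of a single physical interface
auto bond0
iface bond0 inet manual
    bond-mode active-backup
    bond-miimon 100
    bond-slaves eth0 eth2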
To better understand the networking, refer to the following diagram:
The following networks will be used for our OpenStack installation using Ansible:
eth0: This will be used for accessing the host itself (untagged VLAN). This interface will have an IP assigned on the host subnet. This will also be used for storage traffic. An optional br-storage bridge and interface can be used for dedicated storage traffic; this isn't used in this section.

eth0.1000: This will be the VLAN (tag 1000) interface that the container bridge (br-mgmt) will be created on. The eth0.1000 interface will not have an IP assigned to it directly; the IP will be assigned to the bridge (br-mgmt), as described here.

br-mgmt: This will be the bridge that connects to eth0.1000 and is a network used solely for container-to-container traffic. This network carries the communication between OpenStack services, such as Glance requiring access to Keystone. This br-mgmt bridge will have an IP address on the management network (also called the container network) so that our hosts can access the containers.

eth1: This will be the network interface that all VLAN-based Neutron traffic will traverse. The controllers and computes will need this interface configured. The storage nodes and HA Proxy node do not need this configuring.

br-vlan: This will be a bridge that connects to eth1. Neither br-vlan nor eth1 will have an IP assigned, as OpenStack Neutron controls these on the fly when networks of type VLAN are created.

eth1.2000: This will be the VLAN (tag 2000) interface that a VXLAN network will be created on. OpenStack Neutron has the ability to create private tenant networks of type VXLAN; these will be created over this interface.

br-vxlan: This will be a bridge that includes eth1.2000 to carry the data created in a VXLAN tunnel network. For our OpenStack environment, this network will allow a user to create Neutron networks of type VXLAN that will be overlaid over this network. This bridge will have an IP assigned to it on the tunnel network.

The first stage is to ensure that the seven hosts described in this section are configured and ready for the installation of OpenStack using the Ansible Playbooks; to do so, follow these steps.
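Before editing any files, make sure each host can create VLAN interfaces and bridges. A minimal sketch, assuming Ubuntu hosts (the vlan and bridge-utils packages and the 8021q module are standard Ubuntu components, not part of the original steps):

# Install the VLAN and bridge userspace tools
sudo apt-get update
sudo apt-get install -y vlan bridge-utils
# Load the 802.1Q kernel module so tagged interfaces can be created
sudo modprobe 8021q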
Edit each host's /etc/network/interfaces file with the following contents (consider using bonded interfaces for production, and edit to suit your network details):

# Host Interface
auto eth0
iface eth0 inet static
    address 192.168.1.101
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 192.168.1.1

# Neutron Interface, no IP assigned
auto eth1
iface eth1 inet manual

# Container management VLAN interface
iface eth0.1000 inet manual
    vlan-raw-device eth0

# OpenStack VXLAN (tunnel/overlay) VLAN interface
iface eth1.2000 inet manual
    vlan-raw-device eth1
Next, edit the same /etc/network/interfaces file to add in the matching bridge information, as follows:

# Bridge for Container network
auto br-mgmt
iface br-mgmt inet static
    bridge_stp off
    bridge_waitport 0
    bridge_fd 0
    # Bridge port references tagged interface
    bridge_ports eth0.1000
    address 172.16.0.101
    netmask 255.255.0.0
    dns-nameservers 192.168.1.1

# Bridge for vlan network
auto br-vlan
iface br-vlan inet manual
    bridge_stp off
    bridge_waitport 0
    bridge_fd 0
    # Notice this bridge port is an untagged interface
    bridge_ports eth1

# Bridge for vxlan network
auto br-vxlan
iface br-vxlan inet static
    bridge_stp off
    bridge_waitport 0
    bridge_fd 0
    # Bridge port references tagged interface
    bridge_ports eth1.2000
    address 172.29.240.101
    netmask 255.255.252.0
    dns-nameservers 192.168.1.1
Once the file is saved, restart networking on each host:

sudo service networking restart
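If the new bridges do not appear after restarting the networking service, bringing all interfaces down and up again (or rebooting the host) usually picks up the new configuration. This is a suggestion rather than part of the original procedure:

# Re-read /etc/network/interfaces for all interfaces
sudo ifdown -a && sudo ifup -a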
Verify that each host can reach the others on each of the networks using fping, as shown here:

# host network (eth0)
fping -g 192.168.1.101 192.168.1.107

# container network (br-mgmt)
fping -g 172.16.0.101 172.16.0.107

# For Computes and Controllers Only
# tunnel network (br-vxlan)
fping -g 172.29.240.101 172.29.240.105
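fping prints one line per address, and every host in the range should report as alive; any line reading "is unreachable" points at a cabling or VLAN problem on that network. The output looks similar to the following (host network addresses from this environment):

192.168.1.101 is alive
192.168.1.102 is alive
192.168.1.103 is alive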
Setting up the networking correctly is important: altering the network after an installation of OpenStack has taken place can be tricky.
We used two physical interfaces (if using bonding, this is a total of four, although they are still referred to as two), allocating appropriate VLANs and dropping the created interfaces into specific bridges. These bridged interfaces, br-mgmt, br-vxlan, and br-vlan, are referenced directly in the Ansible Playbook configurations, so do not change these names.
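Once networking has been restarted, a quick way to confirm that the expected bridge names exist on a host is with brctl (from the bridge-utils package); this check is purely illustrative:

# List bridges and their member ports; br-mgmt, br-vlan and br-vxlan should all appear
brctl show
# Confirm the management IP landed on the bridge itself
ip addr show br-mgmt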
The host network on eth0 is the network that will have the default gateway of your LAN, and this network will be used to access the Internet to pull down the required packages as part of the OpenStack installation.
We create a VLAN tagged interface, eth0.1000, on eth0, which will be used for container-to-container traffic. The Ansible Playbooks install the OpenStack services in LXC containers, and these containers must be able to communicate with each other. This network is not routable and is only used for inter-container communication. This VLAN tagged interface is dropped into the bridge, br-mgmt. The br-mgmt bridge is given an IP address on this management (container) network so that the hosts can communicate with the containers when they eventually get created in the next two recipes.
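If you want to check that the container traffic really is tagged with VLAN 1000, the kernel exposes the VLAN ID of the sub-interface once the 8021q module is loaded; again, this is only an illustrative check:

# VID: 1000 should be reported for the container management interface
cat /proc/net/vlan/eth0.1000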
The second interface (or second bonded interface) carries the traffic for Neutron, so only the controllers and computes need this interface. As we are configuring our environment to carry both VLAN and VXLAN Neutron tenant networks, we first create a VLAN tagged interface, eth1.2000, and drop this into the bridge br-vxlan. As this is for VXLAN traffic, we assign an IP to this bridge so that tunnels can be created over this network. This network doesn't have any routes associated with it. We then create a br-vlan bridge and drop the untagged interface eth1 into it. This is because, when we eventually come to create Neutron tenant networks of type VLAN, Neutron adds the tags to this untagged interface.
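Once the environment is installed, this is what those two bridges enable. The commands below are only an illustration using the neutron CLI of the time; the network names, VLAN ID, VNI, and the physical network label (which must match your deployment's provider network configuration) are all assumptions:

# A network of type VLAN; its traffic leaves tagged on eth1 via br-vlan
neutron net-create web-net --provider:network_type vlan \
  --provider:physical_network vlan --provider:segmentation_id 200

# A tenant network of type VXLAN; its traffic is encapsulated over br-vxlan
neutron net-create private-net --provider:network_type vxlan \
  --provider:segmentation_id 1001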