The installation of this service takes place over three nodes: the controller node, the network node, and the compute node.
We will follow the same installation procedure on the controller node that we have used for the previous services: we will first create the database and the Keystone objects, install the packages, and then perform the initial configuration of the system.
As usual, let's create our checklist to ensure we have all the prerequisite data beforehand:
| Name | Info |
|---|---|
| Access to the Internet | Yes |
| Proxy needed | No |
| Proxy IP and port | Not applicable |
| Node name | OSControllerNode |
| Node IP address | 172.22.6.95 |
| Node OS | Ubuntu 14.04.1 LTS |
| Neutron DB password | n3utron |
| Neutron Keystone password | n3utronpwd |
| Neutron port | 9696 |
| Nova Keystone password | n0vakeypwd |
| RabbitMQ password | rabb1tmqpass |
| Metadata password | m3tadatapwd |
We create a blank database after logging in to the MySQL server:
mysql -u root -p
Enter the password: dbr00tpassword
Once in the database, execute the following command:
create database neutron;
This will create an empty database called neutron. Let us now set up the Neutron database user credentials as we did earlier:

GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'n3utron';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'n3utron';

This allows the username neutron, using our password, to access the database called neutron.
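To make sure the grants work before moving on, you can try connecting to MySQL as the new user (a quick sanity check; the password is the one we set in the GRANT statements):

mysql -u neutron -p -e "SHOW DATABASES;"

If the privileges are in place, the neutron database will be listed in the output.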
The Neutron control components are installed using the APT package manager by running the following command:
sudo apt-get install neutron-server neutron-plugin-ml2 python-neutronclient
Ensure that these are installed successfully.
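One way to double-check is to query the package database (a simple verification; the package names are the ones we passed to apt-get):

dpkg -l | grep neutron

Each of the Neutron packages should show the ii (installed) status.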
Now, let's look at some of the initial configuration tasks on the controller node.
Create the user in Keystone; by now, you will be familiar with exporting the credentials in order to use the different OpenStack command line utilities:
keystone user-create --name neutron --pass n3utronpwd
You should see something like the following screenshot:
Then, assign the admin role to the user by running the following command:
keystone user-role-add --user neutron --tenant service --role admin
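To verify the assignment, you can list the roles granted to the user in the service tenant using the same keystone client:

keystone user-role-list --user neutron --tenant service

The admin role should appear in the output.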
The Neutron service is created using the following command:
keystone service-create --name neutron --type network --description "OpenStack Networking"
The service will look as follows:
We will have to note the ID of the service, which we will use in the next section. In our case, the ID is 73376c096f154179a293a83a22cce643.
The endpoint is created using the following command, where you replace the ID with the ID you received during service creation:
keystone endpoint-create --service-id 73376c096f154179a293a83a22cce643 --publicurl http://OSControllerNode:9696 --internalurl http://OSControllerNode:9696 --adminurl http://OSControllerNode:9696 --region dataCenterOne
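You can confirm that the endpoint was registered correctly by listing all the endpoints:

keystone endpoint-list

The new entry should show port 9696 in the public, internal, and admin URLs.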
On the controller node, we have a few files to modify:
- /etc/neutron/neutron.conf: This file is used to configure Neutron
- /etc/neutron/plugins/ml2/ml2_conf.ini: This file helps configure the ML2 plugin
- /etc/nova/nova.conf: This allows Nova to use Neutron rather than the default Nova networking

Let's view one file at a time. However, before we proceed, we will need the ID of the service tenant that we created. We can obtain it by using the keystone tenant-list command and picking the ID of the service tenant.
We will have to export the variables (or source the file, as we have done in the past) as shown in the following screenshot:
So, the service tenant ID is 8067841bed8547b0a21459ff4c8d58f7. This will be different for you, so substitute your own ID in the following configuration.
In the /etc/neutron/neutron.conf file, make the following changes.

In the [database] section:

connection = mysql://neutron:n3utron@OSControllerNode/neutron

In the [DEFAULT] section:

rpc_backend = rabbit
rabbit_host = OSControllerNode
rabbit_password = rabb1tmqpass
auth_strategy = keystone
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://OSControllerNode:8774/v2
nova_admin_auth_url = http://OSControllerNode:35357/v2.0
nova_region_name = dataCenterOne
nova_admin_username = nova
nova_admin_tenant_id = 8067841bed8547b0a21459ff4c8d58f7
nova_admin_password = n0vakeypwd
verbose = True

In the [keystone_authtoken] section:

auth_uri = http://OSControllerNode:5000/v2.0
identity_uri = http://OSControllerNode:35357
admin_tenant_name = service
admin_user = neutron
admin_password = n3utronpwd
This completes the changes to the main Neutron configuration file.
Now, in the /etc/neutron/plugins/ml2/ml2_conf.ini file, make the following changes.

In the [ml2] section, enable GRE and flat networking as follows:

type_drivers = flat,gre
tenant_network_types = gre
mechanism_drivers = openvswitch

In the [ml2_type_gre] section, enable the tunnel ID ranges; these don't need to exist in the physical network, as they are only used between the compute and network nodes:

tunnel_id_ranges = 1:1000

In the [securitygroup] section, make the following changes:

enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
Finally, edit the /etc/nova/nova.conf file so that Nova uses Neutron for networking. In the [DEFAULT] section, set the following:

network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
In the [neutron] section, set the Neutron connection details, including the metadata settings (the metadata agent itself runs on the network node):

url = http://OSControllerNode:9696
auth_strategy = keystone
admin_auth_url = http://OSControllerNode:35357/v2.0
admin_tenant_name = service
admin_username = neutron
admin_password = n3utronpwd
service_metadata_proxy = True
metadata_proxy_shared_secret = m3tadatapwd
This configures Nova to use Neutron for networking.
The Neutron database can now be populated by running the following command as the root user:

su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade juno" neutron

This runs the database migration as the neutron user.
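To confirm that the schema was created, you can list the tables in the neutron database (you will be prompted for the Neutron DB password):

mysql -u neutron -p neutron -e "SHOW TABLES;"

A long list of Neutron tables indicates that the migration succeeded.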
To finalize the installation, delete the SQLite database that comes along with the Ubuntu packages:
rm -rf /var/lib/neutron/neutron.sqlite
Restart all the services, including Nova and Neutron, by running the following commands:
service nova-api restart
service nova-scheduler restart
service nova-conductor restart
service neutron-server restart
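If you want to be sure the services came back up cleanly, you can check their status on Ubuntu 14.04, for example:

service neutron-server status

You can also look at /var/log/neutron/server.log for any startup errors.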
At this point, the installation is complete on the controller node.
The network node is the fourth and final node that we will be using for our setup. This node needs to have at least three network cards—you may recall this from the architecture design that we saw in Chapter 1, An Introduction to OpenStack:
The roles of these networks are fairly straightforward: the management network is used to install and manage the nodes, the external network provides instances with connectivity to the outside world, and the tunnel network carries the tunneled tenant traffic between the compute nodes and the network node.
Before we begin working, let's prepare our checklist so that we have all the information about the system handy, as follows:
| Name | Info |
|---|---|
| Access to the Internet | Yes |
| Proxy needed | No |
| Proxy IP and port | Not applicable |
| Node name | |
| Node IP address | 172.22.6.98 (management network), 10.0.0.1 (tunnel interface) |
| Node OS | Ubuntu 14.04.1 LTS |
| Neutron DB password | n3utron |
| Neutron Keystone password | n3utronpwd |
| Neutron port | 9696 |
| Nova Keystone password | n0vakeypwd |
| RabbitMQ password | rabb1tmqpass |
| Metadata password | m3tadatapwd |
| External interface | eth2 |
On the network node, we will need to make some changes, which will help us set up network forwarding before we go into the actual installation.
We start by editing the /etc/sysctl.conf file to ensure the following parameters are set:
net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
These lines are needed on the network node so that it can forward packets from one interface to another, essentially behaving like a router. We also disable the Reverse Path (rp) filter so that the kernel doesn't perform source validation; this is required for floating IPs to work. To ensure that the changes take effect, reload the system control settings as follows:

sudo sysctl -p

This command reloads the /etc/sysctl.conf file that we modified.
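You can confirm that the new values are active by querying them directly:

sysctl net.ipv4.ip_forward net.ipv4.conf.all.rp_filter net.ipv4.conf.default.rp_filter

The first parameter should report 1 and the other two should report 0.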
We will install the Neutron packages for the ML2 plugin and the OVS agents along with Layer 3 and DHCP agents:
sudo apt-get install neutron-plugin-ml2 neutron-plugin-openvswitch-agent neutron-l3-agent neutron-dhcp-agent
Once the packages are installed, we will configure them. The Neutron configuration is similar to that on the controller node; the L3 agent and DHCP agent configurations are additional. We will also configure the metadata agent, which is used to serve metadata to the instances when they come up.
Now, let us look at some of the initial configuration tasks on the network node.
In the /etc/neutron/neutron.conf file, make the following changes.
In the [database] section, comment out any connection options, as the network node does not access the database directly.

In the [DEFAULT] section:

rabbit_host = OSControllerNode
rabbit_password = rabb1tmqpass
auth_strategy = keystone
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
verbose = True

In the [keystone_authtoken] section:

auth_uri = http://OSControllerNode:5000/v2.0
identity_uri = http://OSControllerNode:35357
admin_tenant_name = service
admin_user = neutron
admin_password = n3utronpwd
In the /etc/neutron/plugins/ml2/ml2_conf.ini file, make the following changes.
In the [ml2] section, enable GRE and flat networking as follows:

type_drivers = flat,gre
tenant_network_types = gre
mechanism_drivers = openvswitch

In the [ml2_type_gre] section, enable the tunnel ID ranges; these don't need to exist in the physical network, as they are only used between the compute and network nodes:

tunnel_id_ranges = 1:1000

In the [securitygroup] section, make the following changes:

enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

In the [ml2_type_flat] section, make this change:

flat_networks = external

In the [ovs] section, set the local interface address for GRE and map the external bridge:

local_ip = 10.0.0.1
enable_tunneling = True
bridge_mappings = external:br-ex

In the [agent] section, enable the GRE tunnels:

tunnel_types = gre
We will configure three agents for Neutron: the Layer 3 agent for routing, the DHCP agent to provide DHCP services to the instances, and the metadata agent, which serves metadata to the instances.
The Layer 3 agent will provide the routing services; we only have to provide the external bridge name. Open the /etc/neutron/l3_agent.ini file and, in the [DEFAULT] section, edit the following:

interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
use_namespaces = True
external_network_bridge = br-ex
router_delete_namespaces = True
verbose = True
The DHCP agent is configured through the /etc/neutron/dhcp_agent.ini file. In the [DEFAULT] section, make these changes:

interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
use_namespaces = True
dhcp_delete_namespaces = True
verbose = True
We can also configure additional settings through a dnsmasq configuration file. We can set any DHCP option; for example, it is recommended to lower the MTU of the instance interfaces by 46 bytes (the GRE encapsulation overhead) so that packets are not fragmented. If we have jumbo frame support, this step may not be required.

dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf

We add the preceding line to the dhcp_agent.ini file and then create a new file called /etc/neutron/dnsmasq-neutron.conf containing the following line:

dhcp-option-force=26,1454
This sets the DHCP option 26 (MTU setting) to 1454. Please note that, if the operating system doesn't honor this setting, it will not have any impact.
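Once an instance boots in this environment, you can check from inside the guest whether the option was honored (assuming a Linux guest whose first interface is eth0):

ip link show eth0

The mtu value in the output should read 1454 rather than the default 1500.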
We finally configure the metadata agent using the /etc/neutron/metadata_agent.ini file. In the [DEFAULT] section, set the following:

auth_url = http://OSControllerNode:5000/v2.0
auth_region = dataCenterOne
admin_tenant_name = service
admin_user = neutron
admin_password = n3utronpwd
nova_metadata_ip = OSControllerNode
metadata_proxy_shared_secret = m3tadatapwd
verbose = True
The entire Neutron configuration is based on Open vSwitch (OVS). We need to create a bridge (br-ex) that will point to the external network. Restart the OVS service and then add the bridge:

service openvswitch-switch restart
ovs-vsctl add-br br-ex
We now need to add the interface to the external bridge. This interface should be the one pointing to the external physical network; in my case, it is eth2. Modify it to reflect the right interface name in your environment:
ovs-vsctl add-port br-ex eth2
This adds the external network interface to the bridge that OpenStack knows as br-ex.
Verify that the interface is added by executing the following command:
ovs-vsctl show
You will receive confirmation as shown in the following screenshot:
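With the bridge in place, restart the Neutron agents on the network node so that they pick up all the configuration changes we made (a typical set of restarts for these Ubuntu packages):

service neutron-plugin-openvswitch-agent restart
service neutron-l3-agent restart
service neutron-dhcp-agent restart
service neutron-metadata-agent restart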
We now need to install and configure the Neutron plugin and agent on every compute node so that it can communicate with the network node. We go back to our familiar checklist:
| Name | Info |
|---|---|
| Access to the Internet | Yes |
| Proxy needed | No |
| Proxy IP and port | Not applicable |
| Node name | |
| Node IP address | 172.22.6.97 (management network), 10.0.0.5 (tunnel interface) |
| Node OS | Ubuntu 14.04.1 LTS |
| Neutron DB password | n3utron |
| Neutron Keystone password | n3utronpwd |
| Neutron port | 9696 |
| Nova Keystone password | n0vakeypwd |
| RabbitMQ password | rabb1tmqpass |
We start by editing the /etc/sysctl.conf file to ensure the following parameters are set:
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
This disables the Reverse Path (rp) filter so that the kernel doesn't perform source validation. Without this, the kernel would drop packets whose source addresses do not match the routes on the receiving interface, which can legitimately happen here. To ensure that the changes take effect, reload the system control settings:

sysctl -p
Install the ML2 plugin and OVS agent by running the following command:
sudo apt-get install neutron-plugin-ml2 neutron-plugin-openvswitch-agent
After this, start OVS by running the following command:
sudo service openvswitch-switch restart
Now, let us look at some of the initial configuration tasks on the compute node.
In the /etc/neutron/neutron.conf file, make the following changes.
In the [database] section, comment out any connection options, as the compute node does not access the database directly.

In the [DEFAULT] section:

rabbit_host = OSControllerNode
rabbit_password = rabb1tmqpass
auth_strategy = keystone
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
verbose = True

In the [keystone_authtoken] section:

auth_uri = http://OSControllerNode:5000/v2.0
identity_uri = http://OSControllerNode:35357
admin_tenant_name = service
admin_user = neutron
admin_password = n3utronpwd
In the /etc/neutron/plugins/ml2/ml2_conf.ini file, make the following changes.
In the [ml2] section, enable GRE and flat networking:

type_drivers = flat,gre
tenant_network_types = gre
mechanism_drivers = openvswitch

In the [ml2_type_gre] section, enable the tunnel ID ranges; these don't need to exist in the physical network, as they are only used between the compute and network nodes:

tunnel_id_ranges = 1:1000

In the [securitygroup] section, change the following:

enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

In the [ovs] section, set the local interface address for GRE:

local_ip = 10.0.0.5
enable_tunneling = True

In the [agent] section, enable the GRE tunnels:

tunnel_types = gre
Edit the /etc/nova/nova.conf file and make the following changes.

In the [DEFAULT] section, set the different drivers:

network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver

In the [neutron] section, make these changes:

url = http://OSControllerNode:9696
auth_strategy = keystone
admin_auth_url = http://OSControllerNode:35357/v2.0
admin_tenant_name = service
admin_username = neutron
admin_password = n3utronpwd
Finally, restart all the components that we installed on the compute node:
sudo service nova-compute restart
sudo service neutron-plugin-openvswitch-agent restart
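Before moving to the controller, you can quickly verify on the compute node itself that the agent is running (an optional local check on Ubuntu 14.04):

service neutron-plugin-openvswitch-agent status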
This concludes the installation of Neutron in our OpenStack environment.
Validate the installation in the same way as we did for the network node. On the controller, after exporting the credentials, execute the following command:
neutron agent-list
We already had four entries from our installation on the network node, but now you can see an entry has been created for the compute node as well: