Installing Neutron

The installation of this service takes place over three nodes: the controller node, the network node, and the compute node.

  • The controller node will have the Neutron server component
  • The compute nodes will have the L2 agents installed
  • The network node will have the Layer 3 agents installed

Installing on the controller node

The same installation procedure that we have seen in the previous services will be followed on the controller node. We will carry out the following steps:

  1. Create a database.
  2. Install the services.

After this, we will perform the initial configuration of the system with the following steps:

  1. Create a Keystone user, service, and endpoints.
  2. Modify configuration files.

As usual, let's create our checklist to ensure we have all the prerequisite data beforehand:

  • Access to the Internet: Yes
  • Proxy needed: No
  • Proxy IP and port: Not applicable
  • Node name: OSControllerNode
  • Node IP address: 172.22.6.95
  • Node OS: Ubuntu 14.04.1 LTS
  • Neutron DB password: n3utron
  • Neutron Keystone password: n3utronpwd
  • Neutron port: 9696
  • Nova Keystone password: n0vakeypwd (from the previous chapter)
  • RabbitMQ password: rabb1tmqpass
  • Metadata password: m3tadatapwd

Creating the database

We create a blank database after logging in to the MySQL server:

mysql -u root -p

Enter the password: dbr00tpassword

Once in the database, execute the following command:

create database neutron;

This will create an empty database called neutron. Let us now set up the Neutron database user credentials as we did earlier:

GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'n3utron';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'n3utron';

This allows the username neutron, using our password, to access the database called neutron.
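
As a quick sanity check (assuming MySQL is reachable locally), we can log in as the new neutron user and list the databases it can see:

mysql -u neutron -p -e "SHOW DATABASES;"

Enter the n3utron password when prompted; the neutron database should appear in the list.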

Installing Neutron control components

The Neutron control components are installed using the APT package manager by running the following command:

sudo apt-get install neutron-server neutron-plugin-ml2 python-neutronclient

Ensure that these are installed successfully.
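
If you want to double-check that the packages are present, one simple way (using dpkg, which is available on Ubuntu by default) is:

dpkg -l | grep neutron

The output should include neutron-server and neutron-plugin-ml2, along with any common Neutron packages pulled in as dependencies.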

Initial configuration

Now, let's look at some of the initial configuration tasks on the controller node.

Creating the Neutron user in Keystone

Create the user in Keystone; by now, you will be familiar with exporting the credentials in order to use the different OpenStack command line utilities:

keystone user-create --name neutron --pass n3utronpwd

You should see something like the following screenshot:

[Screenshot: Creating the Neutron user in Keystone]

Then assign the admin role to the user by running the following command:

keystone user-role-add --user neutron --tenant service --role admin
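
To confirm the assignment, we can list the roles held by the neutron user in the service tenant; the admin role should show up in the output:

keystone user-role-list --user neutron --tenant service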

Creating the Neutron service in Keystone

The Neutron service is created using the following command:

keystone service-create --name neutron --type network --description "OpenStack Networking"

The service will look as follows:

[Screenshot: Creating the Neutron service in Keystone]

We will have to note the id of the service, which we will use in the next section. In our case, the id is 73376c096f154179a293a83a22cce643.
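
If the id was not noted down, it can be recovered at any time; for example, the following snippet filters the service list for the network service and prints its id:

keystone service-list | awk '/ network / {print $2}'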

Creating the Neutron endpoint in Keystone

The endpoint is created using the following command, where you replace the ID with the ID you received during service creation:

keystone endpoint-create \
  --service-id 73376c096f154179a293a83a22cce643 \
  --publicurl http://OSControllerNode:9696 \
  --internalurl http://OSControllerNode:9696 \
  --adminurl http://OSControllerNode:9696 \
  --region dataCenterOne

The following is the output:

[Screenshot: Creating the Neutron endpoint in Keystone]

Modifying the configuration files

On the controller node, we have a few files to modify:

  • /etc/neutron/neutron.conf: This file is used to configure Neutron
  • /etc/neutron/plugins/ml2/ml2_conf.ini: This file helps configure the ML2 plugin
  • /etc/nova/nova.conf: This allows Nova to use Neutron rather than the default Nova networking

Let's view one file at a time. However, before we proceed, we will need the ID of the service tenant that we created. We can obtain it by running keystone tenant-list and picking the ID of the service tenant.

We will have to export the variables (or source the file, as we have done in the past) as shown in the following screenshot:

[Screenshot: exporting the credentials and listing tenants with keystone tenant-list]

So, the service tenant ID is 8067841bed8547b0a21459ff4c8d58f7. This will be different for you, so substitute this in the following configuration.
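
For reference, a minimal set of exports looks like the following; ADMIN_PASS is a placeholder for the admin password you chose earlier:

export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://OSControllerNode:35357/v2.0
keystone tenant-list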

In the /etc/neutron/neutron.conf file, make the following changes:

  • Under the [database] section:
    connection = mysql://neutron:n3utron@OSControllerNode/neutron
    
  • Under the [DEFAULT] section:
    rpc_backend = rabbit
    rabbit_host = OSControllerNode
    rabbit_password = rabb1tmqpass
    auth_strategy = keystone
    core_plugin = ml2
    service_plugins = router
    allow_overlapping_ips = True
    notify_nova_on_port_status_changes = True
    notify_nova_on_port_data_changes = True
    nova_url = http://OSControllerNode:8774/v2
    nova_admin_auth_url = http://OSControllerNode:35357/v2.0
    nova_region_name = dataCenterOne
    nova_admin_username = nova
    nova_admin_tenant_id = 8067841bed8547b0a21459ff4c8d58f7
    nova_admin_password = n0vakeypwd
    verbose = True
    
  • Under the [keystone_authtoken] section:
    auth_uri = http://OSControllerNode:5000/v2.0
    identity_uri = http://OSControllerNode:35357
    admin_tenant_name = service
    admin_user = neutron
    admin_password = n3utronpwd
    

These changes need to be done in the Neutron configuration.

Now, in the /etc/neutron/plugins/ml2/ml2_conf.ini file, make the following changes:

  • In the [ml2] section, enable the GRE and flat networking as follows:
    type_drivers = flat,gre
    tenant_network_types = gre
    mechanism_drivers = openvswitch
    
  • In the [ml2_type_gre] section, enable the tunnel ID ranges; these don't need to be in the physical network, as they will only be between the compute and network nodes:
    tunnel_id_ranges = 1:1000
    
  • In the [securitygroup] section, make the following changes:
    enable_security_group = True
    enable_ipset = True
    firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
    

    Tip

    The next set of changes is to be made in the /etc/nova/nova.conf file.

  • In the [DEFAULT] section, set the following:
    network_api_class = nova.network.neutronv2.api.API
    security_group_api = neutron
    linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
    firewall_driver = nova.virt.firewall.NoopFirewallDriver
    
  • In the [neutron] section, enable the metadata-related configuration, which will be enabled on the network node:
    url = http://OSControllerNode:9696
    auth_strategy = keystone
    admin_auth_url = http://OSControllerNode:35357/v2.0
    admin_tenant_name = service
    admin_username = neutron
    admin_password = n3utronpwd
    service_metadata_proxy = True
    metadata_proxy_shared_secret = m3tadatapwd
    

These changes configure Nova to use Neutron for networking.

Setting up the database

The Neutron database can now be populated by running the following command as the root user:

/bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade juno" neutron

Note

Ensure that the command does not result in an error. If it does result in an error, check if the configuration settings have been changed according to the preceding section.
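
One optional way to verify that the schema was created is to list the tables in the neutron database using the credentials from our checklist:

mysql -u neutron -p -e "SHOW TABLES;" neutron

A correctly populated database will contain tables such as agents, networks, ports, and subnets.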

Finalizing the installation

To finalize the installation, delete the SQLite database that comes along with the Ubuntu packages:

rm -rf /var/lib/neutron/neutron.sqlite

Restart all the services, including Nova and Neutron, by running the following commands:

service nova-api restart
service nova-scheduler restart
service nova-conductor restart
service neutron-server restart

At this point, the installation is complete on the controller node.

Validating the installation

In order to validate the installation, let's execute some Neutron commands:

neutron ext-list
neutron agent-list

Note

Make sure that the commands do not throw any errors.
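
If either command fails, two quick things to check on the controller node are whether the Neutron server is running and whether it is listening on port 9696 from our checklist:

service neutron-server status
sudo netstat -plnt | grep 9696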

Installing on the network node

The network node is the fourth and final node that we will be using for our setup. This node needs to have at least three network cards—you may recall this from the architecture design that we saw in Chapter 1, An Introduction to OpenStack:

  • Management network
  • External network
  • Tunnel network

The roles of these networks are fairly straightforward: the management network is used to install and manage the nodes; the external network provides connectivity to the outside world; and the tunnel network carries the tunneled (GRE) traffic between the compute and network nodes.

Before we begin working, let's prepare our checklist so that we have all the information about the system handy, as follows:

  • Access to the Internet: Yes
  • Proxy needed: No
  • Proxy IP and port: Not applicable
  • Node name: OSNetworkNode
  • Node IP address: 172.22.6.98 (management network), 10.0.0.1 (tunnel interface)
  • Node OS: Ubuntu 14.04.1 LTS
  • Neutron DB password: n3utron
  • Neutron Keystone password: n3utronpwd
  • Neutron port: 9696
  • Nova Keystone password: n0vakeypwd (from the previous chapter)
  • RabbitMQ password: rabb1tmqpass
  • Metadata password: m3tadatapwd
  • External interface: eth2

Setting up the prerequisites

On the network node, we will need to make some changes, which will help us set up network forwarding before we go into the actual installation.

We start by editing the /etc/sysctl.conf file to ensure the following parameters are set:

  • net.ipv4.ip_forward=1
  • net.ipv4.conf.all.rp_filter=0
  • net.ipv4.conf.default.rp_filter=0

These lines are needed on the network node in order for it to be able to forward packets from one interface to another, essentially behaving like a router. We also disable the Reverse Path (rp) filter so that the kernel doesn't perform source validation; this is required so that we can use floating IPs. In order to ensure that the changes take effect, reload the system control as follows:

sudo sysctl -p

This command reloads the sysctl.conf file that we modified.
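
To confirm that the values took effect, we can query them back in a single command:

sysctl net.ipv4.ip_forward net.ipv4.conf.all.rp_filter net.ipv4.conf.default.rp_filter

The first parameter should report 1 and the other two should report 0.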

Installing Neutron packages

We will install the Neutron packages for the ML2 plugin and the OVS agents along with Layer 3 and DHCP agents:

sudo apt-get install neutron-plugin-ml2 neutron-plugin-openvswitch-agent neutron-l3-agent neutron-dhcp-agent

Note

Depending on your Internet speed, this might take some time.

Once the packages are installed, we will configure them. The Neutron configuration is similar to that on the controller node; in addition, we configure the ML2 plugin, the DHCP agent, and the L3 agent. We will also configure the metadata agent, which serves metadata to the instances on the compute nodes when they come up.

Initial configuration on the network node

Now, let us look at some of the initial configuration tasks on the network node.

Neutron configuration

In the /etc/neutron/neutron.conf file, make the following changes:

  • Under the [database] section:
    • Remove any connection string that is present, as the database access is not needed directly by the network node
  • Under the [DEFAULT] section:
    rabbit_host = OSControllerNode
    rabbit_password = rabb1tmqpass
    auth_strategy = keystone
    core_plugin = ml2
    service_plugins = router
    allow_overlapping_ips = True
    verbose = True
    
  • Under the [keystone_authtoken] section:
    auth_uri = http://OSControllerNode:5000/v2.0
    identity_uri = http://OSControllerNode:35357
    admin_tenant_name = service
    admin_user = neutron
    admin_password = n3utronpwd
    

ML2 plugin

In the /etc/neutron/plugins/ml2/ml2_conf.ini file, make the following changes:

  • In the [ml2] section, enable GRE and flat networking as follows:
    type_drivers = flat,gre
    tenant_network_types = gre
    mechanism_drivers = openvswitch
    
  • In the [ml2_type_gre] section, enable the tunnel ID ranges; these don't need to be in the physical network, as they will only be between the compute and network nodes:
    tunnel_id_ranges = 1:1000
  • In the [securitygroup] section, make the following changes:
    enable_security_group = True
    enable_ipset = True
    firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
    
  • In the [ml2_type_flat] section, make this change:
    flat_networks = external
    
  • Under the [ovs] section for OVS (set the interface address for GRE), make these changes:
    local_ip = 10.0.0.1
    enable_tunneling = True
    bridge_mappings = external:br-ex
    
  • Under the [agent] section, enable the GRE tunnels:
    tunnel_types = gre
    

Configuring agents

We will configure three agents for Neutron: the Layer 3 agent for routing, the DHCP agent to provide DHCP services to the instances, and the metadata agent, which serves metadata to the instances running on the compute nodes.

Layer 3 agent

The Layer 3 agent will provide the routing services; we only have to provide the external bridge name. Open and edit the /etc/neutron/l3_agent.ini file as follows:

  • In the [DEFAULT] section, edit the following:
    interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
    use_namespaces = True
    external_network_bridge = br-ex
    router_delete_namespaces = True
    verbose = True
    
DHCP agent

This is configured by the /etc/neutron/dhcp_agent.ini file:

  • Under the [DEFAULT] section, make these changes:
    interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
    dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
    use_namespaces = True
    dhcp_delete_namespaces = True
    verbose = True
    

We can also configure additional settings through a custom dnsmasq configuration file. We can set any DHCP option; for example, it is recommended that we lower the MTU of the instance interfaces by 46 bytes so that the GRE encapsulation does not cause fragmentation. If we have jumbo frame support, then this step may not be required.

dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf

We just add the preceding line to the dhcp_agent.ini file and then create a new file called /etc/neutron/dnsmasq-neutron.conf containing the following line:

dhcp-option-force=26,1454

This sets the DHCP option 26 (MTU setting) to 1454. Please note that, if the operating system doesn't honor this setting, it will not have any impact.
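
As a convenience, the same file can be created in a single step from the shell; this is simply an alternative to editing it by hand:

echo "dhcp-option-force=26,1454" | sudo tee /etc/neutron/dnsmasq-neutron.conf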

Configuring the metadata agent

We finally configure the metadata agent using the /etc/neutron/metadata_agent.ini file as follows:

  • Under the [DEFAULT] section:
    auth_url = http://OSControllerNode:5000/v2.0
    auth_region = dataCenterOne
    admin_tenant_name = service
    admin_user = neutron
    admin_password = n3utronpwd
    nova_metadata_ip = OSControllerNode
    metadata_proxy_shared_secret = m3tadatapwd
    verbose = True
    

Note

Please remember that the metadata proxy shared secret must be the same in the controller node configuration (in nova.conf) and in the network node configuration (in metadata_agent.ini).
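
A quick way to confirm that the two values match is to grep for the setting on each node, using the files we edited earlier:

grep metadata_proxy_shared_secret /etc/nova/nova.conf                # on the controller node
grep metadata_proxy_shared_secret /etc/neutron/metadata_agent.ini    # on the network node

Both commands should print the same m3tadatapwd value.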

Setting up OVS

The entire Neutron configuration is based on OVS (Open vSwitch). We need to create a bridge (br-ex) that will point to the external network.

service openvswitch-switch restart
ovs-vsctl add-br br-ex

We now need to add the interface to the external bridge. This interface should be the one pointing to the external physical network world. In my case, it is eth2. Modify it to reflect the right interface name in your environment:

ovs-vsctl add-port br-ex eth2

This adds the external network to the bridge that OpenStack knows as br-ex.

Verify that the interface is added by executing the following command:

ovs-vsctl show

You will receive confirmation as shown in the following screenshot:

[Screenshot: ovs-vsctl show output listing the br-ex bridge and the eth2 port]
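
If you prefer a terser check, the list-ports subcommand of ovs-vsctl prints just the ports attached to a bridge:

ovs-vsctl list-ports br-ex

This should print eth2, or whichever external interface you added in your environment.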

Finalizing the installation

We now need to restart all the services we have installed by running the following commands:

sudo service neutron-plugin-openvswitch-agent restart
sudo service neutron-l3-agent restart
sudo service neutron-dhcp-agent restart
sudo service neutron-metadata-agent restart

Validating the installation

On the controller node, after exporting the environment variables for authentication, execute the following command:

neutron agent-list

You will see the four agents we configured on the network node, as in the following screenshot:

[Screenshot: neutron agent-list output showing the four agents on the network node]

Installing on the compute node

We need to install and configure the Neutron plugin and agent; this needs to be done on every compute node so that it can communicate with the network node. We now go back to our familiar checklist:

  • Access to the Internet: Yes
  • Proxy needed: No
  • Proxy IP and port: Not applicable
  • Node name: OSComputeNode
  • Node IP address: 172.22.6.97 (management network), 10.0.0.5 (tunnel interface)
  • Node OS: Ubuntu 14.04.1 LTS
  • Neutron DB password: n3utron
  • Neutron Keystone password: n3utronpwd
  • Neutron port: 9696
  • Nova Keystone password: n0vakeypwd (from the previous chapter)
  • RabbitMQ password: rabb1tmqpass

Setting up the prerequisites

We start by editing the /etc/sysctl.conf file to ensure the following parameters are set:

net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0

This disables the Reverse Path (rp) filter so that the kernel doesn't perform source validation. The kernel will start dropping packets if we don't set this, as some of them may actually be destined for another IP address. In order to ensure that the changes take effect, reload the system control:

sysctl -p

This command reloads the sysctl.conf file that we modified.

Installing packages

Install the ML2 plugin and OVS agent by running the following command:

sudo apt-get install neutron-plugin-ml2 neutron-plugin-openvswitch-agent

Note

Ensure that the components are installed successfully.

After this, start OVS by running the following command:

sudo service openvswitch-switch restart

Initial configuration

Now, let us look at some of the initial configuration tasks on the compute node.

Neutron configuration

In the /etc/neutron/neutron.conf file, make the following changes:

  • Under the [database] section:
    • Remove any connection string that is present, as the database access is not needed directly by the compute node
  • Under the [DEFAULT] section:
    rabbit_host = OSControllerNode
    rabbit_password = rabb1tmqpass
    auth_strategy = keystone
    core_plugin = ml2
    service_plugins = router
    allow_overlapping_ips = True
    verbose = True
    
  • Under the [keystone_authtoken] section:
    auth_uri = http://OSControllerNode:5000/v2.0
    identity_uri = http://OSControllerNode:35357
    admin_tenant_name = service
    admin_user = neutron
    admin_password = n3utronpwd
    

ML2 plugin

In the /etc/neutron/plugins/ml2/ml2_conf.ini file, make the following changes:

  • In the [ml2] section, enable GRE and flat networking:
    type_drivers = flat,gre
    tenant_network_types = gre
    mechanism_drivers = openvswitch
    
  • In the [ml2_type_gre] section, enable the tunnel ID ranges; these don't need to be in the physical network as they will only be between the compute and network nodes:
    tunnel_id_ranges = 1:1000
    
  • In the [securitygroup] section, change the following:
    enable_security_group = True
    enable_ipset = True
    firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
    
  • Under the [ovs] section for OVS (set the interface address for GRE), set the following parameters:
    local_ip = 10.0.0.5
    enable_tunneling = True
    
  • Under the [agent] section, enable the GRE tunnels:
    tunnel_types = gre
    

Nova configuration

Edit the /etc/nova/nova.conf file and make the following changes:

  • In the [DEFAULT] section, set the different drivers:
    network_api_class = nova.network.neutronv2.api.API
    security_group_api = neutron
    linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
    firewall_driver = nova.virt.firewall.NoopFirewallDriver
    
  • In the [neutron] section, make these changes:
    url = http://OSControllerNode:9696
    auth_strategy = keystone
    admin_auth_url = http://OSControllerNode:35357/v2.0
    admin_tenant_name = service
    admin_username = neutron
    admin_password = n3utronpwd
    

Finalizing the installation

Finally, restart all the components that we installed on the compute node:

sudo service nova-compute restart
sudo service neutron-plugin-openvswitch-agent restart

This concludes the installation of Neutron in our OpenStack environment.

Validating the installation

Validate the installation in the same way as we did for the Network node. On the controller, after exporting the credentials, execute the following command:

neutron agent-list

We already had four entries from our installation on the network node, but now you can see an entry has been created for the compute node as well:

[Screenshot: neutron agent-list output including the new Open vSwitch agent on the compute node]