To demonstrate the creation and use of load balancers in Neutron, this next section is dedicated to building a functional load balancer based on the following scenario:
A tenant has a simple Neutron network set up with a router attached to both an external provider network and internal tenant network. The user would like to load balance HTTP traffic between two instances running a web server. Each instance is configured with an index.html page containing a unique server identifier.
To eliminate the installation and configuration of a web server for this example, you can mimic the behavior of one using the SimpleHTTPServer Python module on the instances, as follows:
ubuntu@web1:~$ echo "This is Web1" > ~/index.html
ubuntu@web1:~$ sudo python -m SimpleHTTPServer 80
Serving HTTP on 0.0.0.0 port 80 ...
Repeat these commands on the second instance, substituting Web2 for Web1 in the index.html file.
The first step to building a functional load balancer is to create a pool. Using the Neutron lb-pool-create command, create a pool with the following attributes:
Name: WEB_POOL
Load balancing method: Round robin
Protocol: HTTP
Subnet ID: <Subnet ID of the pool members>
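As a sketch, the attributes above map onto the LBaaS v1 CLI roughly as follows; SUBNET_ID is a placeholder for the actual subnet ID in your environment:

```shell
# Create the pool; SUBNET_ID is a placeholder for the subnet
# that contains the pool members
neutron lb-pool-create --name WEB_POOL \
  --lb-method ROUND_ROBIN \
  --protocol HTTP \
  --subnet-id SUBNET_ID
```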
The next step to building a functional load balancer is to create and associate pool members with the pool.
In this environment, there are two instances eligible for use in the pool:
Using the Neutron lb-member-create command, create two pool members with the following attributes based on the nova list output:
Member 1:
Instance: WEB1
Address: 10.30.0.7
Protocol port: 80
Pool: WEB_POOL
Member 2:
Instance: WEB2
Address: 10.30.0.8
Protocol port: 80
Pool: WEB_POOL
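Using the v1 CLI, the two members above would be created roughly as follows:

```shell
# Add WEB1 to the pool
neutron lb-member-create --address 10.30.0.7 --protocol-port 80 WEB_POOL
# Add WEB2 to the pool
neutron lb-member-create --address 10.30.0.8 --protocol-port 80 WEB_POOL
```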
The following screenshot demonstrates the process of creating the first pool member:
Repeat the process shown in the preceding screenshot to create the second pool member.
The Neutron lb-member-list command returns a list showing the two pool members but does not list their associated pools. As a workaround, you can specify the columns to be returned, as shown in the following figure:
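A sketch of that workaround, using the -c option to request specific columns (the pool_id field links each member to its pool):

```shell
# Request specific member columns, including the associated pool ID
neutron lb-member-list -c id -c address -c protocol_port -c pool_id
```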
To provide high availability of an application to clients, it is recommended to create and apply a health monitor to a pool. Without a monitor, the load balancer will continue to send traffic to members that may not be available.
Using the Neutron lb-healthmonitor-create command, create a health monitor with the following attributes:
Delay: 5
Max retries: 3
Timeout: 4
Type: TCP
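As a sketch, the monitor above can be created with the v1 CLI as follows:

```shell
# Check members every 5 seconds; mark a member DOWN after 3
# consecutive failures; consider a check failed after 4 seconds
neutron lb-healthmonitor-create --delay 5 \
  --max-retries 3 \
  --timeout 4 \
  --type TCP
```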
To associate the newly created health monitor with the pool, use the lb-healthmonitor-associate command, as follows:
neutron lb-healthmonitor-associate HEALTH_MONITOR_ID POOL
Now, consider the following screenshot:
The last step in creating a functional load balancer is to create the virtual IP, or VIP, which acts as a listener and balances traffic across pool members. Using the Neutron lb-vip-create command, create a virtual IP with the following attributes:
Name: WEB_VIP
Protocol port: 80
Protocol: HTTP
Subnet ID: <Subnet ID of Pool>
Pool: WEB_POOL
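Expressed with the v1 CLI, the VIP above would be created roughly as follows; SUBNET_ID is a placeholder:

```shell
# Create the virtual IP on the pool's subnet, listening on port 80
neutron lb-vip-create --name WEB_VIP \
  --protocol-port 80 \
  --protocol HTTP \
  --subnet-id SUBNET_ID \
  WEB_POOL
```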
Once the virtual IP is created, the state of the VIP and pool will change to ACTIVE:
A listing of the network namespaces on the host running the LBaaS agent reveals a network namespace that corresponds to the load balancer just created:
The IP configuration within the namespace reveals an interface that corresponds to the subnet of the virtual IP:
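The inspection described above can be sketched as follows; with the haproxy namespace driver, the load balancer namespace is named after the pool ID (POOL_ID is a placeholder):

```shell
# List namespaces on the LBaaS agent host; the load balancer
# namespace is named qlbaas-<pool ID>
ip netns list | grep qlbaas
# Show the interface inside the namespace that holds the VIP address
ip netns exec qlbaas-POOL_ID ip addr
```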
Neutron creates an HAProxy configuration file specific to every load balancer created by users. The load balancer configuration files can be found in the /var/lib/neutron/lbaas/ directory of the host running the LBaaS agent.
The configuration file for this load balancer built by Neutron can be seen in the following screenshot:
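While the screenshot is not reproduced here, a configuration generated by the v1 haproxy driver generally follows this shape (IDs abbreviated as placeholders; exact defaults and timeouts vary by release):

```
frontend <VIP ID>
    bind 10.30.0.9:80
    mode http
    default_backend <pool ID>

backend <pool ID>
    mode http
    balance roundrobin
    timeout check 4s
    server <member ID> 10.30.0.7:80 weight 1 check inter 5s fall 3
    server <member ID> 10.30.0.8:80 weight 1 check inter 5s fall 3
```

Note how the pool's round robin method, the members' addresses and ports, and the health monitor's delay (inter 5s), max retries (fall 3), and timeout (timeout check 4s) all surface in the generated file.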
From within the router namespace, confirm direct connectivity to WEB1 and WEB2 via their respective addresses over port 80 using curl:
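A sketch of those checks; ROUTER_ID is a placeholder for the ID of the tenant router:

```shell
# Connect directly to each pool member from the router namespace
ip netns exec qrouter-ROUTER_ID curl http://10.30.0.7/
ip netns exec qrouter-ROUTER_ID curl http://10.30.0.8/
```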
By opening multiple connections to the virtual IP 10.30.0.9 within the router namespace, you can observe round robin load balancing in effect:
With round robin load balancing, connections are evenly distributed between the two pool members.
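The test above can be sketched as a short loop; ROUTER_ID is a placeholder:

```shell
# Open four consecutive connections to the VIP from the router
# namespace; with round robin, the responses should alternate
# between "This is Web1" and "This is Web2"
for i in 1 2 3 4; do
    ip netns exec qrouter-ROUTER_ID curl -s http://10.30.0.9/
done
```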
A packet capture on WEB1 reveals that the load balancer performs a TCP health check every 5 seconds:
In the preceding output, the load balancer sends a TCP SYN packet every 5 seconds and immediately sends a RST upon receiving the SYN ACK from the pool member.
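A capture like the one described can be taken on the pool member itself; the interface name eth0 is an assumption about the instance:

```shell
# Watch the TCP health checks arriving from the load balancer
ubuntu@web1:~$ sudo tcpdump -i eth0 -n tcp port 80
```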
To observe the monitor removing a pool member from eligibility, stop the web service on WEB1 and observe the packet captures and logs:
In the preceding output, the web service is stopped and connections to port 80 are refused. Immediately following the third failure, the load balancer marks the pool member as DOWN:
While WEB1 is down, all subsequent connections to the VIP are sent to WEB2:
After restarting the web service on WEB1, the load balancer places the server back in the pool upon the next successful health check:
To connect to the virtual IP externally, a floating IP must be associated with it, because the VIP exists within a subnet behind the router and is not directly reachable.
Using the Neutron floatingip-create command, assign a floating IP to be used with the virtual IP:
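As a sketch, the association maps the floating IP to the VIP's Neutron port; EXTERNAL_NET and VIP_PORT_ID are placeholders for the external provider network and the port ID of the virtual IP:

```shell
# Allocate a floating IP from the external network and bind it
# to the VIP's Neutron port
neutron floatingip-create EXTERNAL_NET --port-id VIP_PORT_ID
```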
A test from a workstation to the floating IP confirms external connectivity to the load balancer and its pool members: