Exposing services through a load balancer

Another way to isolate your containers is to front them with a load balancer. This mode of operation offers several advantages. First, the load balancer can provide intelligent load balancing across multiple backend nodes. If a container dies, the load balancer can remove it from the load balancing pool. Second, you're effectively hiding your containers behind a load balancing Virtual IP (VIP) address. Clients believe that they are interacting directly with the application running in the container when they are actually interacting with the load balancer. In many cases, a load balancer can also provide or offload security features, such as SSL termination and web application firewalling, which make it easier to scale a container-based application securely. In this recipe, we'll learn how this can be done and look at some of the features available in Docker that make it easier.

Getting ready

We'll be using multiple Docker hosts in the following examples. We'll also be using a user-defined overlay network. It will be assumed that you know how to configure the Docker hosts for overlay networking. If you do not, please see the Creating a user-defined overlay network recipe in Chapter 3, User-Defined Networks.

How to do it…

Load balancing is not a new concept and is one that is well understood in the physical and virtual machine space. However, load balancing with containers adds an extra layer of complexity that can make things drastically harder to manage. To start with, let's look at how load balancing typically works without containers:

[Figure: A traditional load balancer providing a VIP (10.10.10.150) for a single backend pool member (192.168.50.150)]

In this case, we have a simple load balancer configuration where the load balancer is providing a VIP for a single backend pool member (192.168.50.150). The flow works like this:

  • The client generates a request toward the VIP (10.10.10.150) hosted on the load balancer
  • The load balancer receives the request, ensures that it has a VIP for that IP, and then generates a request to the backend pool member(s) for that VIP on behalf of the client
  • The server receives the request sourced from the load balancer and responds directly back to the load balancer
  • The load balancer then responds back to the client

In most cases, the conversation involves two distinct TCP sessions: one between the client and the load balancer, and another between the load balancer and the server.

Now, let's show an example of how this might work in the container space. Examine the topology shown in the following figure:

[Figure: A container-based topology with an haproxy load balancer container on docker1 and web containers hosted on docker2 and docker3]

In this example, we'll be using both container-based application servers as backend pool members as well as a container-based load balancer. Let's make the following assumptions:

  • The hosts docker2 and docker3 will provide hosting for many different web presentation containers that support many different VIPs
  • We will use one load balancer container (haproxy instance) for each VIP we wish to define
  • Each presentation server exposes port 80

Given this, we can assume that host network mode is out of the question for both the load balancer host (docker1) and the web server hosts (docker2 and docker3), since it would require containers to expose services on a large number of ports. Before the introduction of user-defined networks, this would have left us having to deal with port mapping on the docker0 bridge.

That would quickly become a problem both to manage and to troubleshoot. For instance, the topology might really look like this:

[Figure: The same topology built with published ports and NAT on the docker0 bridge of each host]

In this case, the load balancer VIP would be a published port on the host docker1, that is, 32769. The web server containers are also publishing ports to expose their services. Let's walk through what a load balancing request might look like:

  • A client from the outside network generates a request to http://docker1.lab.lab:32769.
  • The docker1 host receives the request and translates the packet through the published port on the haproxy container. This changes the destination IP and port to 172.17.0.2:80.
  • The haproxy container receives the request and determines that the VIP being accessed has a backend pool containing docker2:32770 and docker3:32771. It selects the docker3 host for this session and sends a request towards docker3:32771.
  • As the request leaves the host docker1, the host performs an outbound MASQUERADE, hiding the container behind the host's IP interface.
  • The request is sent to the host's default gateway (the MLS), which, in turn, forwards the request down to the host docker3.
  • The docker3 host receives the request and translates the packet through the published port on the web2 container. This changes the destination IP and port to 172.17.0.3:80.
  • The web2 container receives the request and responds back toward docker1.
  • The docker3 host receives the reply and translates the packet back through the inbound published port.
  • The response is received at docker1, translated back through the outbound MASQUERADE, and delivered to the haproxy container.
  • The haproxy container then responds back to the client. The docker1 host translates the haproxy container's response back to its own IP address on port 32769 and the response makes its way back to the client.

While doable, it's a lot to keep track of. In addition, the load balancer node needs to be aware of the published port and IP address of each backend container. If a container gets restarted, the published port can change, effectively making it unreachable. Troubleshooting this with a large backend pool would be a headache as well.
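
For instance, just finding the current published port for a single backend container means querying the host it runs on, and the load balancer configuration would need to be updated every time that port changed. A sketch of that lookup, using the container names and the port values assumed in the walkthrough above:

user@docker3:~$ docker port web2
80/tcp -> 0.0.0.0:32771
user@docker3:~$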

So while this is certainly doable, the introduction of user-defined networks makes it much more manageable. For instance, we could leverage an overlay type network for the backend pool members and remove the need for much of the port publishing and outbound masquerading. That topology would look more like this:

[Figure: The load balancer and backend pool members connected through a user-defined overlay network]

Let's see what it would take to build this kind of configuration. The first thing we need to do is to define a user-defined overlay type network on one of the nodes. We'll define it on docker1 and call it presentation_backend:

user@docker1:~$ docker network create -d overlay \
--internal presentation_backend
bd9e9b5b5e064aee2ddaa58507fa6c15f49e4b0a28ea58ffb3da4cc63e6f8908
user@docker1:~$

Note

Note how I passed the --internal flag when I created this network. You'll recall from Chapter 3, User-Defined Networks, that this means that only containers connected to this network will be able to access it.
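
If you'd like to confirm that the flag took effect, you can inspect the network; the Go template shown here is just a convenience to pull the Internal field out of the full inspect output:

user@docker1:~$ docker network inspect -f '{{.Internal}}' presentation_backend
true
user@docker1:~$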

The next thing we want to do is to create the two web containers which will serve as the backend pool members for the load balancer. We'll do that on hosts docker2 and docker3:

user@docker2:~$ docker run -dP --name=web1 \
--net presentation_backend jonlangemak/web_server_1
6cc8862f5288b14e84a0dd9ff5424a3988de52da5ef6a07ae593c9621baf2202
user@docker2:~$
user@docker3:~$ docker run -dP --name=web2 \
--net presentation_backend jonlangemak/web_server_2
e2504f08f234220dd6b14424d51bfc0cd4d065f75fcbaf46c7b6dece96676d46
user@docker3:~$
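
Optionally, before moving on, you can verify from either host that its web container attached to the overlay; a plain inspect of the network lists the containers connected on that host (output omitted here for brevity):

user@docker2:~$ docker network inspect presentation_backend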

The remaining component to deploy is the load balancer. As mentioned, haproxy provides a container image of its load balancer, so we'll use that for this example. Before we run the container, we need to come up with a configuration that we can pass into the container for haproxy to use. This is done by mounting a volume into the container, as we'll see shortly. The configuration file is named haproxy.cfg and my example configuration looks like this:

global
    log 127.0.0.1   local0
defaults
    log     global
    mode    http
    option  httplog
    timeout connect 5000
    timeout client 50000
    timeout server 50000
    stats enable
    stats auth user:docker
    stats uri /lbstats
frontend all
    bind *:80
    use_backend pres_containers

backend pres_containers
    balance roundrobin
    server web1 web1:80 check
    server web2 web2:80 check
    option httpchk HEAD /index.html HTTP/1.0

A couple of items are worth pointing out in the preceding configuration:

  • We bind the haproxy service to all interfaces on port 80
  • Any request hitting the container on port 80 will get load balanced to a pool named pres_containers
  • The pres_containers pool load balances in a round-robin method between two servers:
    • web1 on port 80
    • web2 on port 80

One of the interesting items here is that we can define the pool members by name. This is a huge advantage that comes along with user-defined networks and means that we don't need to worry about tracking container IP addressing.
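
If you'd like to see that name resolution in action outside of haproxy, you can launch a throwaway container on the same overlay and look up one of the pool members. This is only a sketch and assumes the busybox image is available on the host (output omitted):

user@docker1:~$ docker run --rm --net presentation_backend busybox nslookup web1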

I put this config file in a folder in my home directory named haproxy:

user@docker1:~/haproxy$ pwd
/home/user/haproxy
user@docker1:~/haproxy$ ls
haproxy.cfg
user@docker1:~/haproxy$

Once the configuration file is in place, we can run the container as follows:

user@docker1:~$ docker run -d --name haproxy \
--net presentation_backend -p 80:80 \
-v ~/haproxy:/usr/local/etc/haproxy/ haproxy
d34667aa1118c70cd333810d9c8adf0986d58dab9d71630d68e6e15816741d2b
user@docker1:~$

You might be wondering why I'm specifying a port mapping when connecting the container to an internal type network. Recall from earlier chapters that port mappings are global across all network types. In other words, even though I'm not using it currently, it's still a characteristic of the container. So if I ever connect the container to a network type that can use the port mapping, it will. In this case, I first need to connect the container to the overlay network to ensure that it has reachability to the backend web servers. If the haproxy container is unable to resolve the pool member names when it starts, it will fail to load.
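
If you want to prove that to yourself, you can try the published port now, before a second network is attached; with only the internal overlay connected, the request should simply fail (the -m flag just stops curl from waiting too long):

user@docker1:~$ curl -m 5 http://docker1.lab.lab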

At this point, the haproxy container has reachability to its pool members, but we have no way to access the haproxy container externally. To do that, we'll connect another interface to the container that can use the port mapping. In this case, that will be the docker0 bridge:

user@docker1:~$ docker network connect bridge haproxy
user@docker1:~$
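
A quick way to confirm that the haproxy container now has a leg in both networks is to format the inspect output so that it lists the attached networks; the Go template below is just one way to do that, and you can see the same information under NetworkSettings.Networks in a plain docker inspect:

user@docker1:~$ docker inspect -f '{{range $name, $net := .NetworkSettings.Networks}}{{$name}} {{end}}' haproxy
bridge presentation_backend
user@docker1:~$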

At this point, the haproxy container should be available externally at the following URLs:

  • Load balanced VIP: http://docker1.lab.lab
  • HAProxy stats: http://docker1.lab.lab/lbstats
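
You can also verify both endpoints from the command line with curl; the credentials for the stats page are the ones defined on the stats auth line of the sample haproxy.cfg (output omitted here):

user@docker1:~$ curl http://docker1.lab.lab
user@docker1:~$ curl -u user:docker http://docker1.lab.lab/lbstats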

If we check the stats page, we should see that the haproxy container can reach each backend web server across the overlay. We can see that the health check for each is coming back with a 200 OK status:

[Figure: The HAProxy stats page showing both backend servers passing their health checks with a 200 OK status]

Now, if we check the VIP itself and hit refresh a couple of times, we should see the web page presented from each container:

[Figure: The web pages served by web1 and web2 as the VIP is refreshed]

This type of topology provides us with several notable advantages over the first concept we had for container load balancing. The use of the overlay-based network not only provided name-based resolution of containers but also significantly reduced the complexity of the traffic path. Granted, the traffic took the same physical path in either case, but we didn't need to rely on so many different NATs for the traffic to work. It also made the entire solution far more dynamic. This type of design can be easily replicated to provide load balancing for many different backend overlay networks.
