Integrating Flannel with Docker

As we mentioned earlier, there is currently no direct integration between Flannel and Docker. This means we'll need to find a way to get the containers onto the Flannel network without Docker directly knowing that's what's happening. In this recipe, we'll show how this is done, discuss some of the prerequisites that led to our current configuration, and see how Flannel handles host-to-host communication.

Getting ready

It is assumed that you're building off the lab described in the previous recipe. In some cases the changes we make may require you to have root-level access to the system.

How to do it…

In the previous recipe, we configured Flannel, but we didn't examine what the Flannel configuration actually did from a network perspective. Let's take a quick look at the configuration of one of our Flannel-enabled hosts to see what's changed:

user@docker4:~$ ip addr
…<loopback interface removed for brevity>…
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether d2:fe:5e:b2:f6:43 brd ff:ff:ff:ff:ff:ff
    inet 192.168.50.102/24 brd 192.168.50.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::d0fe:5eff:feb2:f643/64 scope link
       valid_lft forever preferred_lft forever
3: flannel0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1472 qdisc pfifo_fast state UNKNOWN group default qlen 500
    link/none
    inet 10.100.15.0/16 scope global flannel0
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:16:78:74:cf brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever 
user@docker4:~$

You'll note the addition of a new interface named flannel0. You'll also see that it has an IP address within the /24 local scope that was assigned to this host. If we dig a little deeper, we can use ethtool to determine that flannel0 is a virtual tun interface:

user@docker4:~$ ethtool -i flannel0
driver: tun
version: 1.6
firmware-version:
bus-info: tun
supports-statistics: no
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no
user@docker4:~$

Flannel creates this interface on each host where the Flannel service is running. Note that the subnet mask of the flannel0 interface is a /16, which covers the entire global scope allocation we defined in etcd. Although the host was allocated only a /24 scope, it believes that the entire /16 is reachable through the flannel0 interface:

user@docker4:~$ ip route
default via 192.168.50.1 dev eth0
10.100.0.0/16 dev flannel0  proto kernel  scope link  src 10.100.15.0
172.17.0.0/16 dev docker0  proto kernel  scope link  src 172.17.0.1
192.168.50.0/24 dev eth0  proto kernel  scope link  src 192.168.50.102
user@docker4:~$

The presence of this interface creates the route, which ensures that traffic headed to any of the local scopes assigned to the other hosts goes through the flannel0 interface. We can prove that this works by pinging the flannel0 interface of one of the other hosts:

user@docker4:~$ ping 10.100.93.0 -c 2
PING 10.100.93.0 (10.100.93.0) 56(84) bytes of data.
64 bytes from 10.100.93.0: icmp_seq=1 ttl=62 time=0.901 ms
64 bytes from 10.100.93.0: icmp_seq=2 ttl=62 time=0.930 ms
--- 10.100.93.0 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.901/0.915/0.930/0.033 ms
user@docker4:~$

Since the physical network has no knowledge of the 10.100.0.0/16 network space, Flannel must encapsulate the traffic as it traverses the physical network. In order to do this, it needs to know which physical Docker host has a given scope assigned to it. Recall from the Flannel logs we examined in the previous recipe that Flannel chose an external interface for each host based on the host's default route:

I0707 09:07:01.733912 02195 main.go:130] Determining IP address of default interface
I0707 09:07:01.734374 02195 main.go:188] Using 192.168.50.102 as external interface

This information, along with the scope assigned to each host, is registered in the key-value store. Using this information, Flannel can determine which host has which scope assigned and can use the external interface of that host as a destination to send the encapsulated traffic towards.
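
If you want to see these registrations for yourself, you can query etcd directly. This is a quick sketch assuming the default etcd prefix of /coreos.com/network; the exact keys and values will vary based on the scopes assigned in your lab:

user@docker1:~$ etcdctl ls /coreos.com/network/subnets
/coreos.com/network/subnets/10.100.15.0-24
/coreos.com/network/subnets/10.100.93.0-24
…<remaining host scopes removed for brevity>…
user@docker1:~$ etcdctl get /coreos.com/network/subnets/10.100.15.0-24
{"PublicIP":"192.168.50.102"}
user@docker1:~$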

Note

Flannel supports multiple backends or transport mechanisms. By default, it encapsulates traffic in UDP on port 8285. In the upcoming recipes, we'll discuss other backend options.
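
If you'd like to see the encapsulation in action, you can watch for this UDP traffic on the host's external interface while pinging across the Flannel network. As a quick sketch, assuming eth0 is the external interface Flannel selected:

user@docker4:~$ sudo tcpdump -ni eth0 udp port 8285

Each packet sent through flannel0 should appear as a UDP datagram exchanged between the external addresses of the two hosts.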

Now that we know how Flannel works, we need to sort out how to get the actual Docker containers onto the Flannel network. The easiest way to do this is to have Docker use the assigned scope as the subnet for the docker0 bridge. Flannel writes the scope information out to a file saved in /run/flannel/subnet.env:

user@docker4:~$ more /run/flannel/subnet.env
FLANNEL_NETWORK=10.100.0.0/16
FLANNEL_SUBNET=10.100.15.1/24
FLANNEL_MTU=1472
FLANNEL_IPMASQ=false
user@docker4:~$
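
Because the file contains simple KEY=value pairs, you can also source it directly in a shell, which can be handy for quick checks or for your own scripts. The values echoed here are simply those from the file above:

user@docker4:~$ source /run/flannel/subnet.env
user@docker4:~$ echo $FLANNEL_SUBNET
10.100.15.1/24
user@docker4:~$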

Using this information, we can configure Docker to use the correct subnet for its bridge interface. Flannel offers two ways to do this. The first involves generating a new Docker configuration file with the mk-docker-opts.sh script that's included alongside the Flannel binary. The script reads the information in the subnet.env file and outputs it as Docker configuration options. For example, we can use it to generate a new configuration as follows:

user@docker4:~$ cd /tmp
user@docker4:/tmp$ ls
flannel-v0.6.2-linux-amd64.tar.gz  mk-docker-opts.sh  README.md  
user@docker4:/tmp$ ./mk-docker-opts.sh -c -d example_docker_config
user@docker4:/tmp$ more example_docker_config
DOCKER_OPTS=" --bip=10.100.15.1/24 --ip-masq=true --mtu=1472"
user@docker4:/tmp$

In systems that don't leverage systemd, Docker will, in most cases, automatically check the file /etc/default/docker for service-level options. This means that we could simply have Flannel's script write the configuration file shown above out to /etc/default/docker, allowing Docker to consume the new settings when the service restarts. However, since our system uses systemd, this method would require updating our Docker drop-in file (/etc/systemd/system/docker.service.d/docker.conf) to look like this:

[Service]
EnvironmentFile=/etc/default/docker
ExecStart=
ExecStart=/usr/bin/dockerd $DOCKER_OPTS

The key lines here are EnvironmentFile, which tells the service to read the file /etc/default/docker, and ExecStart, which passes the variable $DOCKER_OPTS to the daemon at runtime. If you use this method, it might be wise to define all of your service-level options in /etc/default/docker for the sake of simplicity.
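
For example, to generate the configuration file in place, you could point the script's -d flag directly at /etc/default/docker. Note that writing to this path requires root privileges, and Docker still needs to be restarted before it consumes the new options (the reload and restart commands are shown later in this recipe):

user@docker4:/tmp$ sudo ./mk-docker-opts.sh -c -d /etc/default/docker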

Note

It should be noted that this first approach relies on running the script to generate the configuration file. If you are running the script manually to generate the file, there's a chance that the configuration file will get out of date if the Flannel configuration changes. The second approach shown later is more dynamic since the /run/flannel/subnet.env file is updated by the Flannel service.

Although the first approach certainly works, I prefer to use a slightly different method where I just load the variables from the /run/flannel/subnet.env file and consume them within the drop-in file. To do this, we change our Docker drop-in file to look like this:

[Service]
EnvironmentFile=/run/flannel/subnet.env
ExecStart=
ExecStart=/usr/bin/dockerd --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}

By specifying /run/flannel/subnet.env as an EnvironmentFile, we make the variables defined in the file available for consumption within the service definition. Then, we just use them as options to pass to the service when it starts. After making these changes on our Docker host, we need to reload the systemd configuration and restart the Docker service:
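
user@docker4:~$ sudo systemctl daemon-reload
user@docker4:~$ sudo systemctl restart docker

With Docker restarted, we should see that the docker0 interface now reflects the Flannel subnet: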

user@docker4:~$ ip addr show dev docker0
8: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:24:0a:e3:c8 brd ff:ff:ff:ff:ff:ff
    inet 10.100.15.1/24 scope global docker0
       valid_lft forever preferred_lft forever
user@docker4:~$ 

You can also manually update the Docker service-level parameters based on the Flannel configuration; just make sure that you use the information from the /run/flannel/subnet.env file. Regardless of which method you choose, make sure that the docker0 bridge is using the configuration specified by Flannel on all four of the Docker hosts. Our topology should now look like this:

Since each Docker host only uses the Flannel-assigned scope for its subnet, each host believes the remaining subnets included in the global Flannel network are still reachable through the flannel0 interface. Only the specific /24 for the assigned local scope is reachable through the docker0 bridge locally:

user@docker4:~$ ip route
default via 192.168.50.1 dev eth0 onlink
10.100.0.0/16 dev flannel0  proto kernel  scope link src 10.100.15.0
10.100.15.0/24 dev docker0  proto kernel  scope link src 10.100.15.1 
192.168.50.0/24 dev eth0  proto kernel  scope link src 192.168.50.102
user@docker4:~$

We can verify the operation of Flannel at this point by running two different containers on two different hosts:

user@docker1:~$ docker run -dP --name=web1 jonlangemak/web_server_1
7e44a55c7ea7704d97a8804bfa211344c66f9fb83b3ac17f697c504b3b193e2d
user@docker1:~$
user@docker4:~$ docker run -dP --name=web2 jonlangemak/web_server_2
39a47920588b5e0d77ca9d2838988e2d8de893dee6198759f9ddbd3b38cea80d
user@docker4:~$

We can now reach the services running on each container directly by IP address. First, find the IP address of one of the containers:

user@docker1:~$ docker exec -it web1 ip addr show dev eth0
12: eth0@if13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1472 qdisc noqueue state UP
    link/ether 02:42:0a:64:5d:02 brd ff:ff:ff:ff:ff:ff
    inet 10.100.93.2/24 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:aff:fe64:5d02/64 scope link
       valid_lft forever preferred_lft forever
user@docker1:~$

Then, access the service from the second container:

user@docker4:~$ docker exec -it web2 curl http://10.100.93.2
<body>
  <html>
    <h1><span style="color:#FF0000;font-size:72px;">Web Server #1 - Running on port 80</span>
    </h1>
</body>
  </html>
user@docker4:~$

Connectivity is working as expected. Now that we have the entire Flannel configuration working with Docker, it's important to call out the order in which we did things. Other solutions we've looked at were able to containerize certain pieces of their stack. For instance, Weave was able to offer its services in a container format rather than requiring local services, as we did with Flannel. With Flannel, each component has prerequisites that must be met in order for it to work.

For instance, we need the etcd service running before Flannel will register. That by itself is not a huge concern and, if both etcd and Flannel ran in containers, you could solve that piece pretty easily. However, since the changes Docker needs to make to its bridge IP address are done at the service level, Docker needs to know about the Flannel scope before starting. This means that we can't run the etcd and Flannel services inside Docker containers because we can't start Docker without the information that Flannel generates based on reading keys from etcd. In this case, the prerequisites for each component are important to understand.
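
One way to make this ordering explicit on a systemd host is to declare the dependency in the Docker drop-in file itself. The following is a minimal sketch that assumes your Flannel service unit is named flanneld.service; adjust the unit name to match your system:

[Unit]
Requires=flanneld.service
After=flanneld.service

[Service]
EnvironmentFile=/run/flannel/subnet.env
ExecStart=
ExecStart=/usr/bin/dockerd --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}

With this in place, systemd will always start Flannel before Docker. Whether the subnet.env file is guaranteed to exist by the time Docker starts depends on how the Flannel unit signals readiness, so it's worth verifying on your own systems.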

Note

When running Flannel on CoreOS, these components can run in containers. The solution for this is detailed in the under-the-hood section of their documentation:

https://coreos.com/flannel/docs/latest/flannel-config.html
