Installing and configuring Flannel

In this recipe, we'll walk through the installation of Flannel. Flannel requires the installation of a key-value store and the Flannel service itself. Because of the dependencies between these components, they need to be configured as actual services on the Docker hosts. To do this, we'll leverage systemd unit files to define each respective service.

Getting ready

In this example, we'll be using the same lab topology we used in Chapter 3, User-Defined Networks, where we discussed user-defined overlay networks.

You'll need a couple of hosts, preferably with some of them being on different subnets. It is assumed that the Docker hosts used in this lab are in their default configuration. In some cases, the changes we make may require you to have root-level access to the system.

How to do it…

As mentioned, Flannel relies on a key-value store to provide information to all the nodes participating in the Flannel network. In other examples, we've run a container-based key-value store such as Consul to provide this functionality. Since Flannel was built by CoreOS, we'll be leveraging their key-value store, named etcd. While etcd is offered in a container format, we can't easily use the container-based version due to some of the prerequisites required for Flannel to work. Instead, we'll download the binaries for both etcd and Flannel and run them as services on our hosts.

Let's start with etcd since it's a prerequisite for Flannel. The first thing you need to do is download the code. In this example, we'll be leveraging etcd version 3.0.12 and running the key-value store on the host docker1. To download the binary, we'll run this command:

user@docker1:~$ curl -LO https://github.com/coreos/etcd/releases/download/v3.0.12/etcd-v3.0.12-linux-amd64.tar.gz

Once downloaded, we can extract the binaries from the archive using this command:

user@docker1:~$ tar xzvf etcd-v3.0.12-linux-amd64.tar.gz

And then we can move the binaries we need into the correct location to make them executable. In this case, the location is /usr/bin and the binaries we want are the etcd service itself as well as its command-line tool named etcdctl:

user@docker1:~$ cd etcd-v3.0.12-linux-amd64
user@docker1:~/etcd-v3.0.12-linux-amd64$ sudo mv etcd /usr/bin/
user@docker1:~/etcd-v3.0.12-linux-amd64$ sudo mv etcdctl /usr/bin/

Now that we have all the pieces in place, the last thing we need to do is to create a service on the system that will take care of running etcd. Since our version of Ubuntu is using systemd, we'll need to create a unit file for the etcd service. To create the service definition, you can create a service unit file in the /lib/systemd/system/ directory:

user@docker1:~$ sudo vi /lib/systemd/system/etcd.service

Then, you can create a service definition to run etcd. An example unit file for the etcd service is shown as follows:

[Unit]
Description=etcd key-value store
Documentation=https://github.com/coreos/etcd
After=network.target

[Service]
Environment=DAEMON_ARGS=
Environment=ETCD_NAME=%H
Environment=ETCD_ADVERTISE_CLIENT_URLS=http://0.0.0.0:2379
Environment=ETCD_LISTEN_CLIENT_URLS=http://0.0.0.0:2379
Environment=ETCD_LISTEN_PEER_URLS=http://0.0.0.0:2378
Environment=ETCD_DATA_DIR=/var/lib/etcd/default
Type=notify
ExecStart=/usr/bin/etcd $DAEMON_ARGS
Restart=always
RestartSec=10s
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Note

Keep in mind that systemd can be configured in many different ways based on your requirements. The unit file given earlier demonstrates one way to configure etcd as a service.
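One detail worth calling out in the sample unit file: it advertises 0.0.0.0 as the client URL, which is convenient for a lab, but etcd's documentation recommends advertising a concrete, routable address, since the advertise URL is what other machines are told to connect to. If you'd like to follow that convention without editing the unit file itself, a systemd drop-in is one way to override the value. This is a sketch, assuming the etcd host's address is 10.10.10.101 (the same address the flanneld unit file later in this recipe points at); adjust it to match your environment:

```ini
# /etc/systemd/system/etcd.service.d/override.conf
# Hypothetical drop-in; replace the IP with your etcd host's address.
[Service]
Environment=ETCD_ADVERTISE_CLIENT_URLS=http://10.10.10.101:2379
```

After creating the drop-in, run sudo systemctl daemon-reload and sudo systemctl restart etcd for it to take effect.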

Once the unit file is in place, we can reload systemd and then enable and start the service:

user@docker1:~$ sudo systemctl daemon-reload
user@docker1:~$ sudo systemctl enable etcd
user@docker1:~$ sudo systemctl start etcd

If for some reason the service doesn't start or stay started, you can check the status of the service by using the systemctl status etcd command:

user@docker1:~$ systemctl status etcd
  etcd.service - etcd key-value store
   Loaded: loaded (/lib/systemd/system/etcd.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2016-10-11 13:41:01 CDT; 1h 30min ago
     Docs: https://github.com/coreos/etcd
 Main PID: 17486 (etcd)
    Tasks: 8
   Memory: 8.5M
      CPU: 22.095s
   CGroup: /system.slice/etcd.service
           └─17486 /usr/bin/etcd

Oct 11 13:41:01 docker1 etcd[17486]: setting up the initial cluster version to 3.0
Oct 11 13:41:01 docker1 etcd[17486]: published {Name:docker1 ClientURLs:[http://0.0.0.0:2379]} to cluster cdf818194e3a8c32
Oct 11 13:41:01 docker1 etcd[17486]: ready to serve client requests
Oct 11 13:41:01 docker1 etcd[17486]: serving insecure client requests on 0.0.0.0:2379, this is strongly discouraged!
Oct 11 13:41:01 docker1 systemd[1]: Started etcd key-value store.
Oct 11 13:41:01 docker1 etcd[17486]: set the initial cluster version to 3.0
Oct 11 13:41:01 docker1 etcd[17486]: enabled capabilities for version 3.0
Oct 11 15:04:20 docker1 etcd[17486]: start to snapshot (applied: 10001, lastsnap: 0)
Oct 11 15:04:20 docker1 etcd[17486]: saved snapshot at index 10001
Oct 11 15:04:20 docker1 etcd[17486]: compacted raft log at 5001
user@docker1:~$

Later on, if Flannel-enabled nodes have trouble talking to etcd, check that etcd is allowing access on all interfaces (0.0.0.0), as shown in the preceding output. This is defined in the sample unit file provided; if it's not defined, etcd will default to listening only on the local loopback interface (127.0.0.1), which prevents remote servers from accessing the service.
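A quick way to confirm which address etcd is bound to is to inspect the host's listening sockets with ss (or netstat). The following sketch runs the same grep check against a captured sample line, since the exact ss output varies by host; on the etcd host itself you would pipe the live command, ss -tln, into the grep instead:

```shell
# Sample line from `ss -tln` on the etcd host (illustrative)
sample_output='LISTEN 0 128 0.0.0.0:2379 0.0.0.0:*'

# If etcd is bound to 0.0.0.0, remote Flannel nodes can reach it.
# Seeing 127.0.0.1:2379 here would mean loopback-only, and remote
# hosts would fail to connect.
if echo "$sample_output" | grep -q '0.0.0.0:2379'; then
    echo "etcd is listening on all interfaces"
else
    echo "etcd appears to be loopback-only"
fi
```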

Note

Since the key-value store configuration is being done explicitly to demonstrate Flannel, we won't be covering the basics of key-value stores. These configuration options are enough to get you up and running on a single node and are not intended to be used in a production environment. Please make sure that you understand how etcd works before using it in a production setting.

Once the etcd service is started, we can then use the etcdctl command-line tool to configure some of the base settings in Flannel:

user@docker1:~$ etcdctl mk /coreos.com/network/config '{"Network":"10.100.0.0/16"}'

We'll discuss these configuration options in a later recipe, but for now, just know that the subnet we defined as the Network parameter defines the Flannel global scope.
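Because the value stored at /coreos.com/network/config is plain JSON, a stray quote or brace will cause flanneld to fail at startup with a configuration error. It can be worth validating the blob locally before writing it with etcdctl; here's a minimal sketch using python3 (any JSON validator, such as jq, would work equally well):

```shell
# The Flannel configuration we intend to store in etcd
config='{"Network":"10.100.0.0/16"}'

# Validate the JSON and echo the global scope back out before
# committing it to etcd with `etcdctl mk`
echo "$config" | python3 -c '
import json, sys
cfg = json.load(sys.stdin)
print("Flannel global scope:", cfg["Network"])
'
# → Flannel global scope: 10.100.0.0/16
```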

Now that we have etcd configured, we can focus on configuring Flannel itself. The configuration of Flannel as a service on the system is very similar to what we just did for etcd. The major difference is that we'll be doing this same configuration on all four lab hosts, whereas the key-value store was only configured on a single host. We'll show the installation of Flannel on a single host, docker4, but you'll need to repeat these steps on each host in your lab environment that you wish to be a member of the Flannel network:

First, we'll download the Flannel binary. In this example, we'll be using version 0.6.2:

user@docker4:~$ cd /tmp/
user@docker4:/tmp$ curl -LO https://github.com/coreos/flannel/releases/download/v0.6.2/flannel-v0.6.2-linux-amd64.tar.gz

Then, we need to extract the files from the archive and move the flanneld binary to the correct location. Note that there is no command-line tool to interact with Flannel as there was with etcd:

user@docker4:/tmp$ tar xzvf flannel-v0.6.2-linux-amd64.tar.gz
user@docker4:/tmp$ sudo mv flanneld /usr/bin/

As with etcd, we want to define a systemd unit file so that we can run flanneld as a service on each host. To create the service definition, you can create another service unit file in the /lib/systemd/system/ directory:

user@docker4:/tmp$ sudo vi /lib/systemd/system/flanneld.service

Then, you can create a service definition to run flanneld. An example unit file for the Flannel service is shown as follows:

[Unit]
Description=Flannel Network Fabric
Documentation=https://github.com/coreos/flannel
Before=docker.service
After=etcd.service

[Service]
Environment='DAEMON_ARGS=--etcd-endpoints=http://10.10.10.101:2379'
Type=notify
ExecStart=/usr/bin/flanneld $DAEMON_ARGS
Restart=always
RestartSec=10s
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
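One thing to note about the DAEMON_ARGS line above: flanneld's --etcd-endpoints flag accepts a comma-delimited list, so if you later run more than one etcd node you can list them all and flanneld will fail over between them. A hypothetical variant of that line (the second address is illustrative, not part of this lab):

```ini
[Service]
Environment='DAEMON_ARGS=--etcd-endpoints=http://10.10.10.101:2379,http://10.10.10.102:2379'
```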

Once the unit file is in place, we can reload systemd and then enable and start the service:

user@docker4:/tmp$ sudo systemctl daemon-reload
user@docker4:/tmp$ sudo systemctl enable flanneld
user@docker4:/tmp$ sudo systemctl start flanneld

If, for some reason, the service doesn't start or stay started, you can check the status of the service using the systemctl status flanneld command:

user@docker4:/tmp$ systemctl status flanneld
  flanneld.service - Flannel Network Fabric
   Loaded: loaded (/lib/systemd/system/flanneld.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2016-10-12 08:50:54 CDT; 6s ago
     Docs: https://github.com/coreos/flannel
 Main PID: 25161 (flanneld)
    Tasks: 6
   Memory: 3.3M
      CPU: 12ms
   CGroup: /system.slice/flanneld.service
           └─25161 /usr/bin/flanneld --etcd-endpoints=http://10.10.10.101:2379

Oct 12 08:50:54 docker4 systemd[1]: Starting Flannel Network Fabric...
Oct 12 08:50:54 docker4 flanneld[25161]: I1012 08:50:54.409928 25161 main.go:126] Installing signal handlers
Oct 12 08:50:54 docker4 flanneld[25161]: I1012 08:50:54.410384 25161 manager.go:133] Determining IP address of default interface
Oct 12 08:50:54 docker4 flanneld[25161]: I1012 08:50:54.410793 25161 manager.go:163] Using 192.168.50.102 as external interface
Oct 12 08:50:54 docker4 flanneld[25161]: I1012 08:50:54.411688 25161 manager.go:164] Using 192.168.50.102 as external endpoint
Oct 12 08:50:54 docker4 flanneld[25161]: I1012 08:50:54.423706 25161 local_manager.go:179] Picking subnet in range 10.100.1.0 ... 10.100.255.0
Oct 12 08:50:54 docker4 flanneld[25161]: I1012 08:50:54.429636 25161 manager.go:246] Lease acquired: 10.100.15.0/24
Oct 12 08:50:54 docker4 flanneld[25161]: I1012 08:50:54.430507 25161 network.go:98] Watching for new subnet leases
Oct 12 08:50:54 docker4 systemd[1]: Started Flannel Network Fabric.
user@docker4:/tmp$

You should see similar output in your log, indicating that Flannel found a lease within the global scope allocation you configured in etcd. These leases are local to each host, so I often refer to them as local scopes or networks. The next step is to complete this configuration on the remaining hosts. By checking the Flannel log on each host, I can tell which subnet was allocated to each one. In my case, this is what I ended up with:

  • docker1: 10.100.93.0/24
  • docker2: 10.100.58.0/24
  • docker3: 10.100.90.0/24
  • docker4: 10.100.15.0/24
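Rather than grepping the logs, you can also read the lease directly: flanneld writes the subnet it acquired to /run/flannel/subnet.env on each host. The sketch below works against an illustrative copy of that file, with values mirroring the docker4 lease above (the MTU value is also illustrative); on a live host you would source /run/flannel/subnet.env itself:

```shell
# Illustrative copy of what flanneld writes to /run/flannel/subnet.env
cat > /tmp/subnet.env <<'EOF'
FLANNEL_NETWORK=10.100.0.0/16
FLANNEL_SUBNET=10.100.15.1/24
FLANNEL_MTU=1472
EOF

# Source the file and report this host's allocation
. /tmp/subnet.env
echo "Global scope: $FLANNEL_NETWORK"
echo "Local subnet: $FLANNEL_SUBNET"
# → Global scope: 10.100.0.0/16
# → Local subnet: 10.100.15.1/24
```

These same variables are what we'll hand to the Docker daemon in the next recipe to make it consume the Flannel network.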

At this point, Flannel is fully configured. In the next recipe, we'll discuss how to configure Docker to consume the Flannel network.
