OVS and Docker together

The recipes up to this point have shown several options for manually configuring Docker networks. Although these are all workable solutions, each requires a fair amount of manual intervention and configuration and is not easily consumable in its current form. If we use the previous recipe as an example, there are a few notable drawbacks:

  • You are responsible for keeping track of IP allocations on the containers, increasing the risk of assigning conflicting IP addresses to different containers
  • There is no dynamic port mapping or inherent outbound masquerading to facilitate communication between a container and the rest of the network
  • While we used Pipework to lessen the configuration burden, a fair amount of manual configuration was still needed to connect a container to the OVS bridge
  • The majority of the configuration would not persist through a host reboot by default

That being said, using what we've learned so far, there is a different way we can leverage the GRE capability of OVS while still letting Docker manage container networking. In this recipe, we'll review that solution and describe how to make it persistent so that it survives a host reboot.

Note

Again, this recipe is for the purpose of example only. This behavior is already supported by Docker's user-defined overlay network type. If, for some reason, you need to use GRE rather than VXLAN, this might be a suitable alternative. As always, make sure you exhaust Docker's native networking constructs before you start building your own. It will save you a lot of headaches down the road!

Getting ready

In this recipe, we'll be demonstrating the configuration on two Docker hosts. The hosts need to be able to talk to each other across the network. It is assumed that the hosts have Docker installed and that Docker is in its default configuration. In order to view and manipulate networking settings, you'll want to ensure that you have the iproute2 toolset installed. If it's not present on the system, it can be installed using the following command:

sudo apt-get install iproute2 

In order to make network changes to the host, you'll also need root-level access.

How to do it…

Using the previous recipe as inspiration, we'll build a similar topology, but with one significant difference:

[Topology diagram: each Docker host pairs a Linux bridge (newbridge) with an OVS bridge that carries the GRE tunnel between the hosts]

You'll notice that each host now has a Linux bridge named newbridge. We're going to tell Docker to use this bridge, rather than the docker0 bridge, for default container connectivity. This means that we're only using OVS for its GRE capability, effectively turning it into a slave of newbridge. Using a Linux bridge for container connectivity means that Docker can handle IPAM for us as well as the inbound and outbound netfilter rules. Using a bridge other than docker0 has more to do with configuration than usability, as we'll see shortly.

We're going to once again start the configuration from scratch, assuming that each host has only Docker installed in its default configuration. The first thing we want to do is configure the two bridges we'll be using on each host. We'll start with the host docker1:

user@docker1:~$ sudo apt-get install openvswitch-switch
…<Additional output removed for brevity>…
Setting up openvswitch-switch (2.0.2-0ubuntu0.14.04.3) ...
openvswitch-switch start/running
user@docker1:~$
user@docker1:~$ sudo ovs-vsctl add-br ovs_bridge
user@docker1:~$ sudo ip link set dev ovs_bridge up
user@docker1:~$
user@docker1:~$ sudo ip link add newbridge type bridge
user@docker1:~$ sudo ip link set newbridge up
user@docker1:~$ sudo ip address add 10.11.12.1/24 dev newbridge
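
Before moving on, you may want to confirm that both bridges are up and that newbridge has its address. These verification commands are additions to the original recipe, using standard iproute2 and OVS tooling:

user@docker1:~$ ip addr show newbridge
user@docker1:~$ sudo ovs-vsctl show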

At this point, we have both an OVS bridge as well as a standard Linux bridge configured on the host. To finish up the bridge configuration, we need to create the GRE interface on the OVS bridge and then bind the OVS bridge to the Linux bridge:

user@docker1:~$ sudo ovs-vsctl add-port ovs_bridge ovs_gre \
-- set interface ovs_gre type=gre options:remote_ip=192.168.50.101
user@docker1:~$
user@docker1:~$ sudo ip link set ovs_bridge master newbridge
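
If you'd like to verify the binding, list the interfaces whose master is newbridge; ovs_bridge should appear in the output. This check is an addition to the original recipe:

user@docker1:~$ ip link show master newbridge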

Now that the bridge configuration is complete, we can tell Docker to use newbridge as its default bridge. We do that by editing the systemd drop-in file and adding the following options:

ExecStart=/usr/bin/dockerd --bridge=newbridge --fixed-cidr=10.11.12.128/26

Notice that, in addition to telling Docker to use a different bridge, I'm also telling Docker to only allocate container IP addresses from 10.11.12.128/26. When we configure the second Docker host (docker3), we'll tell Docker to only assign container IP addresses from 10.11.12.192/26. This is a hack, but it prevents the two Docker hosts from issuing overlapping IP addresses without either host having to be aware of what the other has already allocated.
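
For reference, the complete drop-in might look something like the following (the file path shown here is just an example). Note that when overriding ExecStart in a drop-in, systemd requires an empty ExecStart= line first to clear the original value:

user@docker1:~$ cat /etc/systemd/system/docker.service.d/docker.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd --bridge=newbridge --fixed-cidr=10.11.12.128/26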

Note

Chapter 3, User-Defined Networks, demonstrated that the native overlay network gets around this by tracking IP allocations between all hosts that participate in the overlay network.

To make Docker use the new options, we need to reload the system configuration and restart the Docker service:

user@docker1:~$ sudo systemctl daemon-reload
user@docker1:~$ sudo systemctl restart docker
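
Once Docker is back up, you can confirm that it picked up the new options by inspecting the default bridge network; the subnet should now reflect newbridge's 10.11.12.0/24 network. This check is an addition to the original recipe:

user@docker1:~$ docker network inspect bridge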

And finally, start a container without specifying a network mode:

user@docker1:~$ docker run --name web1 -d -P jonlangemak/web_server_1
82c75625f8e5436164e40cf4c453ed787eab102d3d12cf23c86d46be48673f66
user@docker1:~$
user@docker1:~$ docker exec web1 ip addr
…<Additional output removed for brevity>…
8: eth0@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:0a:0b:0c:80 brd ff:ff:ff:ff:ff:ff
    inet 10.11.12.128/24 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:aff:fe0b:c80/64 scope link
       valid_lft forever preferred_lft forever
user@docker1:~$

As expected, the first container we ran gets the first available IP address in the 10.11.12.128/26 allocation range. Note that the mask shown inside the container is /24, inherited from newbridge's network; the /26 only constrains which addresses Docker hands out.
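
As a quick sanity check (an addition to the original recipe), the container should be able to reach its default gateway, which is newbridge's IP address on the host:

user@docker1:~$ docker exec -it web1 ping 10.11.12.1 -c 2

Now, let's move on to configuring the second host, docker3: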

user@docker3:~$ sudo apt-get install openvswitch-switch
…<Additional output removed for brevity>…
Setting up openvswitch-switch (2.0.2-0ubuntu0.14.04.3) ...
openvswitch-switch start/running
user@docker3:~$
user@docker3:~$ sudo ovs-vsctl add-br ovs_bridge
user@docker3:~$ sudo ip link set dev ovs_bridge up
user@docker3:~$
user@docker3:~$ sudo ip link add newbridge type bridge
user@docker3:~$ sudo ip link set newbridge up
user@docker3:~$ sudo ip address add 10.11.12.2/24 dev newbridge
user@docker3:~$
user@docker3:~$ sudo ip link set ovs_bridge master newbridge
user@docker3:~$ sudo ovs-vsctl add-port ovs_bridge ovs_gre \
-- set interface ovs_gre type=gre options:remote_ip=10.10.10.101
user@docker3:~$
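
At this point, the GRE tunnel is defined on both ends. On either host, sudo ovs-vsctl show should list a port named ovs_gre with type gre and the peer's address in options:remote_ip, and the two hosts should be able to reach each other directly. These verification commands are additions to the original recipe:

user@docker3:~$ sudo ovs-vsctl show
user@docker3:~$ ping 10.10.10.101 -c 2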

On this host, tell Docker to use the following options by editing the systemd drop-in file:

ExecStart=/usr/bin/dockerd --bridge=newbridge --fixed-cidr=10.11.12.192/26

Reload the system configuration and restart the Docker service:

user@docker3:~$ sudo systemctl daemon-reload
user@docker3:~$ sudo systemctl restart docker

Now spin up a container on this host:

user@docker3:~$ docker run --name web2 -d -P jonlangemak/web_server_2
eb2b26ee95580a42568051505d4706556f6c230240a9c6108ddb29b6faed9949
user@docker3:~$
user@docker3:~$ docker exec web2 ip addr
…<Additional output removed for brevity>…
9: eth0@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:0a:0b:0c:c0 brd ff:ff:ff:ff:ff:ff
    inet 10.11.12.192/24 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:aff:fe0b:cc0/64 scope link
       valid_lft forever preferred_lft forever
user@docker3:~$

At this point, each container should be able to talk to the other across the GRE tunnel:

user@docker3:~$ docker exec -it web2 curl http://10.11.12.128
<body>
  <html>
    <h1><span style="color:#FF0000;font-size:72px;">Web Server #1 - Running on port 80</span>
    </h1>
</body>
  </html>
user@docker3:~$

In addition, each host still gets all the benefits Docker normally provides: IPAM, port publishing, and outbound masquerading for the containers.

We can verify port publication:

user@docker1:~$ docker port web1
80/tcp -> 0.0.0.0:32768
user@docker1:~$ curl http://localhost:32768
<body>
  <html>
    <h1><span style="color:#FF0000;font-size:72px;">Web Server #1 - Running on port 80</span>
    </h1>
</body>
  </html>
user@docker1:~$

And we can verify outbound access through the default Docker masquerade rule:

user@docker1:~$ docker exec -it web1 ping 4.2.2.2 -c 2
PING 4.2.2.2 (4.2.2.2): 48 data bytes
56 bytes from 4.2.2.2: icmp_seq=0 ttl=50 time=30.797 ms
56 bytes from 4.2.2.2: icmp_seq=1 ttl=50 time=31.399 ms
--- 4.2.2.2 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max/stddev = 30.797/31.098/31.399/0.301 ms
user@docker1:~$
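
If you're curious where that outbound masquerading comes from, Docker installs a MASQUERADE rule for the bridge's subnet in the nat table, which you can view with the following command (an addition to the original recipe):

user@docker1:~$ sudo iptables -t nat -L POSTROUTING -n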

The last advantage of this setup is that we can easily make it persist through host reboots. The only configuration that needs to be recreated is the Linux bridge newbridge and the connection between newbridge and the OVS bridge; the OVS configuration itself is stored in the OVS database and survives a reboot on its own. To make this persistent, we can add the following configuration to each host's network configuration file (/etc/network/interfaces).

Note

Ubuntu will not process bridge-related configuration in the interfaces file unless you have the bridge-utils package installed on the host:

sudo apt-get install bridge-utils

  • Host docker1:
    auto newbridge
    iface newbridge inet static
      address 10.11.12.1
      netmask 255.255.255.0
      bridge_ports ovs_bridge
  • Host docker3:
    auto newbridge
    iface newbridge inet static
      address 10.11.12.2
      netmask 255.255.255.0
      bridge_ports ovs_bridge

By putting the newbridge configuration information into the network start script, we accomplish two tasks. First, we create the bridge that Docker expects to use before the Docker service itself starts. Without this, the Docker service would fail to start because it couldn't find the bridge. Second, specifying ovs_bridge in the bridge's bridge_ports directive binds the OVS bridge to newbridge at the same time the bridge is created. When we did this earlier by hand with the ip link command, that binding would not have persisted through a system reboot.
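
After a reboot, you can confirm that everything came back up as expected. These verification commands are additions to the original recipe:

user@docker1:~$ ip addr show newbridge
user@docker1:~$ ip link show master newbridge
user@docker1:~$ sudo ovs-vsctl show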
