Working with IPv6-enabled containers

In the previous recipe, we saw how Docker handles the basic allocation of IPv6-enabled containers. The behavior we've seen up to this point has closely mimicked what we saw in earlier chapters when dealing only with IPv4-addressed containers. However, this is not the case for all of the network functionality. Docker does not currently have complete feature parity between IPv4 and IPv6. Namely, as we'll see in this recipe, Docker does not have iptables (ip6tables) integration for IPv6-enabled containers. In this recipe, we'll review some of the network features that we previously visited with IPv4-only containers and see how they behave when using IPv6 addressing.

Getting ready

In this recipe, we'll be building off of the lab we built in the previous recipe. You'll need root-level access to each host to make network configuration changes. It is assumed that Docker is installed and running its default configuration.

How to do it…

As mentioned, Docker does not currently have host firewall, specifically netfilter or iptables, integration for IPv6. This means that several of the features we relied on previously with IPv4 behave differently when dealing with a container's IPv6 address. Let's start with some of the basic functionality. In the previous recipe, we saw that two containers on the same host, connected to the docker0 bridge, could talk directly to one another.

This behavior was expected and works in much the same manner as it does with IPv4 addresses. If we wanted to prevent this communication, we might look to disable Inter-Container Communication (ICC) in the Docker service. Let's update our Docker options on the host docker1 to set ICC to false:

ExecStart=/usr/bin/dockerd --icc=false --ipv6 --fixed-cidr-v6=2003:cd11::/64

Then, we need to reload the systemd configuration, restart the Docker service, and restart the containers.

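On a systemd-based host, the reload and restart typically look like the following (this assumes the option change above was made in the Docker unit file or a systemd drop-in):

user@docker1:~$ sudo systemctl daemon-reload
user@docker1:~$ sudo systemctl restart docker

With the daemon running again, we can restart the containers and check their addressing: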
user@docker1:~$ docker start web1
web1
user@docker1:~$ docker start web2
web2
user@docker1:~$ docker exec web2 ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 02:42:ac:11:00:03
          inet addr:172.17.0.3  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::42:acff:fe11:3/64 Scope:Link
          inet6 addr: 2003:cd11::242:ac11:3/64 Scope:Global
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:12 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1128 (1.1 KB)  TX bytes:648 (648.0 B)

user@docker1:~$
user@docker1:~$ docker exec -it web1 curl http://172.17.0.3
curl: (7) couldn't connect to host
user@docker1:~$ docker exec -it web1 curl -g http://[2003:cd11::242:ac11:3]
<body>
  <html>
    <h1><span style="color:#FF0000;font-size:72px;">Web Server #2 - Running on port 80</span>
    </h1>
</body>
  </html>
user@docker1:~$

As we can see, the attempt over IPv4 fails, while the subsequent IPv6 attempt succeeds. Since Docker is not managing any firewall rules related to the containers' IPv6 addresses, there is nothing to prevent direct connectivity between IPv6 addresses.

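If you needed to block this IPv6 traffic, you would have to do so manually. As a rough sketch, a single ip6tables rule on the host can replicate the --icc=false behavior for IPv6; it mirrors the iptables rule Docker installs for IPv4, and it assumes your kernel passes bridged traffic to ip6tables (that is, the net.bridge.bridge-nf-call-ip6tables sysctl is set to 1):

user@docker1:~$ sudo ip6tables -I FORWARD -i docker0 -o docker0 -j DROP

Docker will neither manage nor persist a rule like this; it's shown only to illustrate the gap.
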
Since Docker isn't managing IPv6-related firewall rules, you might also assume that features such as outbound masquerading and port publishing no longer work. While it's true that Docker does not create IPv6 NAT rules or firewall policy, that does not mean a container's IPv6 address is unreachable from the outside network. Let's walk through an example to show you what I mean. We'll start a container on the second Docker host:

user@docker2:~$ docker run -dP --name=web2 jonlangemak/web_server_2
5e2910c002db3f21aa75439db18e5823081788e69d1e507c766a0c0233f6fa63
user@docker2:~$
user@docker2:~$ docker port web2
80/tcp -> 0.0.0.0:32769
user@docker2:~$

Note that when we ran the container on the host docker2, we passed the -P flag to tell Docker to publish the exposed ports of the container. If we check the port mapping, we can see that the host has chosen port 32769. Also note that the port mapping lists an IP address of 0.0.0.0, which typically represents any IPv4 address. Let's perform some quick tests from the other Docker host to validate what is and isn't working:

user@docker1:~$ curl http://10.10.10.102:32769
<body>
  <html>
    <h1><span style="color:#FF0000;font-size:72px;">Web Server #2 - Running on port 80</span>
    </h1>
</body>
  </html>
user@docker1:~$

As expected, the IPv4 port mapping works. We're able to access the container's service through the Docker host's IPv4 address by leveraging the iptables NAT rule that maps port 32769 to the actual service port of 80. Now let's try the same thing using the host's IPv6 address:

user@docker1:~$ curl -g http://[2003:ab11::2]:32769
<body>
  <html>
    <h1><span style="color:#FF0000;font-size:72px;">Web Server #2 - Running on port 80</span>
    </h1>
</body>
  </html>
user@docker1:~$

Surprisingly, this also works. You might be wondering how, considering that Docker doesn't manage or integrate with any of the host's IPv6 firewall policy. The answer is actually quite simple. If we look at the second Docker host's open ports, we'll see that there is a docker-proxy service bound to port 32769:

user@docker2:~$ sudo netstat -plnt
Active Internet connections (only servers)
…<output removed for brevity>…
Local Address   Foreign Address         State       PID/Program name
0.0.0.0:22      0.0.0.0:*               LISTEN      1387/sshd
127.0.0.1:6010  0.0.0.0:*               LISTEN      3658/0
:::22           :::*                    LISTEN      1387/sshd
::1:6010        :::*                    LISTEN      3658/0
:::32769        :::*                    LISTEN      2390/docker-proxy
user@docker2:~$

As we saw in earlier chapters, the docker-proxy service facilitates inter-container and published port connectivity. In order for this to work, the docker-proxy service has to bind to the port that the container publishes. Recall that services listening on all IPv4 interfaces are shown with the address 0.0.0.0. In a similar fashion, the address :: represents all IPv6 interfaces, which netstat displays as :::32769 (the final colon separating the address from the port). You'll note that the docker-proxy port is bound to all IPv6 interfaces. Although this may differ based on your operating system, binding to all IPv6 interfaces generally also implies binding to all IPv4 interfaces. That is, the preceding docker-proxy service is actually listening on all of the host's IPv4 and IPv6 interfaces.

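Whether binding to all IPv6 interfaces also covers IPv4 is controlled on Linux by a kernel parameter. As a quick check, you can inspect the net.ipv6.bindv6only sysctl; a value of 0, the default on most distributions, means IPv6 sockets accept IPv4 connections as well:

user@docker2:~$ sysctl net.ipv6.bindv6only
net.ipv6.bindv6only = 0
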
Note

Keep in mind that docker-proxy is not typically used for inbound services; those rely on the iptables NAT rules to map the published port to the container. However, when those rules don't exist, the host is still listening on all of its interfaces for traffic to port 32769.

The net result of this is that, despite there being no IPv6 NAT rule, I'm still able to access the container's service through the Docker host's interfaces. In this manner, published ports still work with IPv6. However, this only works when using docker-proxy. That mode of operation, while still the default, is slated to be removed in favor of hairpin NAT. Hairpin NAT can be enabled on the Docker host by passing the --userland-proxy=false parameter to Docker as a service-level option, and doing so would prevent this means of IPv6 port publishing from working.

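For example, the docker1 service definition we modified earlier might look like the following with the userland proxy disabled (a sketch only; adjust it to your own unit file):

ExecStart=/usr/bin/dockerd --userland-proxy=false --icc=false --ipv6 --fixed-cidr-v6=2003:cd11::/64
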
Finally, the lack of firewall integration also means that we no longer have support for the outbound masquerade feature. In IPv4, this feature allowed containers to talk to the outside network without having to worry about routing or overlapping IP addresses: container traffic leaving the host was always hidden behind one of the host's IP interfaces. However, this was not a required configuration. As we saw in earlier chapters, you could easily disable the outbound masquerade feature and provision the docker0 bridge with a routable IP address and subnet. As long as the outside, or external, network knew how to reach that subnet, a container could easily have a unique, routable IP address.

One of the reasons IPv6 came to be was the rapid depletion of IPv4 addresses. NAT in IPv4 served as a largely successful, if equally troublesome, stopgap for the address depletion problem. Because of this, many believe that we shouldn't implement any sort of NAT for IPv6 whatsoever. Rather, all IPv6 prefixes should be natively routable and reachable without the obfuscation of address translation.

Lacking IPv6 firewall integration, natively routing IPv6 traffic to each host is currently the means by which Docker facilitates reachability across multiple Docker hosts and the outside network. This requires that each Docker host use a unique IPv6 CIDR range and that each Docker host know how to reach every other Docker host's defined CIDR range. While this typically requires the physical network to carry that reachability information, in our simple lab example each host just requires a static route to the other host's CIDR. Much like we did in the first recipe, we'll add an IPv6 route on each host so that both know how to reach the IPv6 subnet of the other's docker0 bridge:

user@docker1:~$ sudo ip -6 route add 2003:ef11::/64 via 2003:ab11::2
user@docker2:~$ sudo ip -6 route add 2003:cd11::/64 via 2003:ab11::1

After adding the routes, each Docker host knows how to get to the other host's IPv6 docker0 bridge subnet:

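One way to confirm this is to inspect each host's IPv6 routing table; the device and metric shown will vary with your environment (this example assumes eth0 is each host's external interface):

user@docker1:~$ ip -6 route | grep 2003:ef11
2003:ef11::/64 via 2003:ab11::2 dev eth0 metric 1024

user@docker2:~$ ip -6 route | grep 2003:cd11
2003:cd11::/64 via 2003:ab11::1 dev eth0 metric 1024
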
If we now check, we should have reachability between containers on each host:

user@docker2:~$ docker exec web2 ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 02:42:ac:11:00:02
          inet addr:172.17.0.2  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::42:acff:fe11:2/64 Scope:Link
          inet6 addr: 2003:ef11::242:ac11:2/64 Scope:Global
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:43 errors:0 dropped:0 overruns:0 frame:0
          TX packets:34 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:3514 (3.5 KB)  TX bytes:4155 (4.1 KB)

user@docker2:~$
user@docker1:~$ docker exec -it web1 curl -g http://[2003:ef11::242:ac11:2]
<body>
  <html>
    <h1><span style="color:#FF0000;font-size:72px;">Web Server #2 - Running on port 80</span>
    </h1>
</body>
  </html>
user@docker1:~$

As we can see, the container on the host docker1 was able to route directly to the container running on the host docker2. As long as each Docker host has the appropriate routing information, containers are able to route directly to one another.

The downside of this approach is that the container is now a fully exposed network endpoint. We no longer get the advantage of exposing only certain ports to the outside network through Docker-published ports. If you want to ensure that only certain ports are exposed on your IPv6 interfaces, the userland proxy may be your best option at this point.

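Alternatively, because the containers' IPv6 addresses are directly routable, you could filter them yourself with ip6tables on each Docker host. The following is only a minimal sketch, assuming you want to allow just TCP port 80 inbound to docker1's 2003:cd11::/64 container subnet; how these rules interact with any existing policy is left to you, and Docker will not manage or persist them for you:

user@docker1:~$ sudo ip6tables -A FORWARD -d 2003:cd11::/64 -m state --state ESTABLISHED,RELATED -j ACCEPT
user@docker1:~$ sudo ip6tables -A FORWARD -d 2003:cd11::/64 -p tcp --dport 80 -j ACCEPT
user@docker1:~$ sudo ip6tables -A FORWARD -d 2003:cd11::/64 -j DROP

Keep these options in mind when designing services around IPv6 connectivity.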