Configuring NDP proxying

As we saw in the last recipe, one of the major differences with IPv6 support in Docker is the lack of firewall integration. Without that integration, we lose things such as outbound masquerading and full port publishing capabilities. While this isn't necessary in all cases, there is a certain convenience factor lost without it. For instance, when running in IPv4-only mode, an administrator can install Docker and immediately connect containers to the outside network. This is because the containers are only ever seen through the Docker host's IP address for both inbound (published port) and outbound (masquerade) connectivity. There is no need to inform the outside network about additional subnets because the outside network only ever sees the Docker host's IP address. In the IPv6 model, the outside network has to know about the container subnets in order to route to them. In this recipe, we'll review how to configure NDP proxying as a workaround to this problem.

Getting ready

In this recipe, we'll be using this lab topology:

[Figure: lab topology — the docker1 and docker2 hosts are dual-stack connected to the 2003:ab11::/64 network]

You'll need root-level access to each host to make network configuration changes. It is assumed that Docker is installed and is running its default configuration.

How to do it…

The preceding topology shows that our hosts are dual-stack connected to the network, but Docker has not yet been configured to use IPv6. As we saw in the previous recipe, configuring Docker for IPv6 would also typically mean configuring routing on the outside network so that it knows how to reach the IPv6 CIDR you define for the docker0 bridge. However, assume for a moment that this isn't possible. Assume that you have no control over the outside network, which means you can't advertise or notify other network endpoints about any newly defined IPv6 subnet on your Docker host.

Let's also assume that, while you can't advertise any newly defined IPv6 networks, you are able to reserve additional IPv6 space within the existing networks. For instance, the hosts currently have interfaces defined within the 2003:ab11::/64 network. If we carve up this space, we can split it into four /66 networks:

  • 2003:ab11::/66
  • 2003:ab11:0:0:4000::/66
  • 2003:ab11:0:0:8000::/66
  • 2003:ab11:0:0:c000::/66

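If you want to double-check the math on a split like this, Python's ipaddress module can enumerate the /66 subnets for you (this assumes python3 is available on one of the hosts):

user@docker1:~$ python3 -c "import ipaddress; [print(n) for n in ipaddress.ip_network('2003:ab11::/64').subnets(new_prefix=66)]"
2003:ab11::/66
2003:ab11:0:0:4000::/66
2003:ab11:0:0:8000::/66
2003:ab11:0:0:c000::/66
user@docker1:~$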
Let's say we're allowed to reserve the last two subnets for our use. We can now enable IPv6 within Docker and allocate these two networks as the IPv6 CIDR ranges. Here are the configuration options for each Docker host:

  • docker1
    ExecStart=/usr/bin/dockerd --ipv6 --fixed-cidr-v6=2003:ab11:0:0:8000::/66
  • docker2
    ExecStart=/usr/bin/dockerd --ipv6 --fixed-cidr-v6=2003:ab11:0:0:c000::/66

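How you deliver these ExecStart changes depends on your distribution. One common approach is a systemd drop-in file; the following is a sketch for docker1 (the drop-in path and filename are my own choices, and the empty ExecStart= line is needed to clear the packaged value before redefining it):

# /etc/systemd/system/docker.service.d/ipv6.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd --ipv6 --fixed-cidr-v6=2003:ab11:0:0:8000::/66

user@docker1:~$ sudo systemctl daemon-reload
user@docker1:~$ sudo systemctl restart docker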
After loading the new configuration into systemd and restarting the Docker service, our lab topology would now look like this:

[Figure: updated lab topology — docker1 now uses 2003:ab11:0:0:8000::/66 and docker2 uses 2003:ab11:0:0:c000::/66 for their docker0 bridges]

Let's launch a container on both hosts:

user@docker1:~$ docker run -d --name=web1 jonlangemak/web_server_1
user@docker2:~$ docker run -d --name=web2 jonlangemak/web_server_2

Now determine the allocated IPv6 address of the web1 container:

user@docker1:~$ docker exec web1 ip -6 addr show dev eth0
4: eth0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
    inet6 2003:ab11::8000:242:ac11:2/66 scope global nodad
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe11:2/64 scope link
       valid_lft forever preferred_lft forever
user@docker1:~$
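If you'd rather not exec into the container, the same address can usually be pulled with docker inspect; for a container on the default bridge, the GlobalIPv6Address field holds it:

user@docker1:~$ docker inspect --format '{{ .NetworkSettings.GlobalIPv6Address }}' web1
2003:ab11::8000:242:ac11:2
user@docker1:~$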

Now, let's try and reach that container from the web2 container:

user@docker2:~$ docker exec -it web2 ping6 2003:ab11::8000:242:ac11:2 -c 2
PING 2003:ab11::8000:242:ac11:2 (2003:ab11::8000:242:ac11:2): 48 data bytes
56 bytes from 2003:ab11::c000:0:0:1: Destination unreachable: Address unreachable
56 bytes from 2003:ab11::c000:0:0:1: Destination unreachable: Address unreachable
--- 2003:ab11::8000:242:ac11:2 ping statistics ---
2 packets transmitted, 0 packets received, 100% packet loss
user@docker2:~$

This fails because the Docker hosts believe that the destination address is directly connected to their eth0 interface. When the web2 container attempts the connection, the following actions occur:

  • The container does a route lookup and determines that the address 2003:ab11::8000:242:ac11:2 does not fall within its local subnet of 2003:ab11:0:0:c000::/66, so it forwards the traffic to its default gateway (the docker0 bridge interface)
  • The host receives the traffic, does a route lookup, and determines that the destination address 2003:ab11::8000:242:ac11:2 falls within its local subnet of 2003:ab11::/64 (eth0), so it uses NDP to try to find the host with that destination IP address (the route check below shows this lookup directly)
  • The host receives no response to this query and the flow fails

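You can confirm the second step directly on the docker2 host by asking the kernel which route it would pick for the destination. The exact output depends on your kernel version, but the key detail is that the route resolves out of eth0 as an on-link destination rather than pointing at a gateway:

user@docker2:~$ ip -6 route get 2003:ab11::8000:242:ac11:2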
We can verify that this is what's happening by checking the docker2 host's IPv6 neighbor table:

user@docker2:~$ ip -6 neighbor show
fe80::20c:29ff:fe50:b8cc dev eth0 lladdr 00:0c:29:50:b8:cc STALE
2003:ab11::c000:242:ac11:2 dev docker0 lladdr 02:42:ac:11:00:02 REACHABLE
2003:ab11::8000:242:ac11:2 dev eth0  FAILED
fe80::42:acff:fe11:2 dev docker0 lladdr 02:42:ac:11:00:02 REACHABLE
user@docker2:~$

Following the normal routing logic, everything is working the way it should. However, IPv6 has a feature called NDP proxy that can help solve this problem. Those of you familiar with proxy ARP in IPv4 will find that NDP proxy provides similar functionality. Essentially, NDP proxy allows a host to answer neighbor solicitation requests on behalf of another endpoint. In our case, we can tell both Docker hosts to answer on behalf of the containers. To do this, we first need to enable NDP proxy on the host itself by setting the kernel parameter net.ipv6.conf.eth0.proxy_ndp, as shown in the following code:

user@docker1:~$ sudo sysctl net.ipv6.conf.eth0.proxy_ndp=1
net.ipv6.conf.eth0.proxy_ndp = 1
user@docker1:~$
user@docker2:~$ sudo sysctl net.ipv6.conf.eth0.proxy_ndp=1
net.ipv6.conf.eth0.proxy_ndp = 1
user@docker2:~$

Note

Keep in mind that these settings won't persist through a reboot when defined in this manner.

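If you do want the setting to persist, one option is to drop it into a sysctl configuration file and reload; the filename below is an arbitrary choice, and the same would need to be done on docker2:

user@docker1:~$ echo 'net.ipv6.conf.eth0.proxy_ndp = 1' | sudo tee /etc/sysctl.d/99-ndp-proxy.conf
user@docker1:~$ sudo sysctl --system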
Once that is enabled, we need to manually tell each host what IPv6 address to answer for. We do that by adding proxy entries to each host's neighbor table. In the preceding example, we need to do that for both the source and the destination container in order to allow for bidirectional traffic flow. First, add the entry on the host docker1 for the destination:

user@docker1:~$ sudo ip -6 neigh add proxy 2003:ab11::8000:242:ac11:2 dev eth0

Then, determine the IPv6 address of the web2 container, which will act as the source of the traffic, and add a proxy entry for it on the docker2 host:

user@docker2:~$ docker exec web2 ip -6 addr show dev eth0
6: eth0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
    inet6 2003:ab11::c000:242:ac11:2/66 scope global nodad
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe11:2/64 scope link
       valid_lft forever preferred_lft forever
user@docker2:~$
user@docker2:~$ sudo ip -6 neigh add proxy 2003:ab11::c000:242:ac11:2 dev eth0

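The proxy entries you just added can be listed on their own if you want to confirm they were recorded; the output should simply echo back the addresses added above:

user@docker1:~$ ip -6 neigh show proxy
user@docker2:~$ ip -6 neigh show proxy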
With these proxy entries in place, each Docker host will reply to neighbor solicitation requests on behalf of the containers. Ping tests should now work as expected:

user@docker2:~$ docker exec -it web2 ping6 2003:ab11::8000:242:ac11:2 -c 2
PING 2003:ab11::8000:242:ac11:2 (2003:ab11::8000:242:ac11:2): 48 data bytes
56 bytes from 2003:ab11::8000:242:ac11:2: icmp_seq=0 ttl=62 time=0.462 ms
56 bytes from 2003:ab11::8000:242:ac11:2: icmp_seq=1 ttl=62 time=0.660 ms
--- 2003:ab11::8000:242:ac11:2 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.462/0.561/0.660/0.099 ms
user@docker2:~$

And we should see the relevant neighbor entry on each host:

user@docker1:~$ ip -6 neighbor show
fe80::20c:29ff:fe7f:3d64 dev eth0 lladdr 00:0c:29:7f:3d:64 router REACHABLE
2003:ab11::8000:242:ac11:2 dev docker0 lladdr 02:42:ac:11:00:02 REACHABLE
fe80::42:acff:fe11:2 dev docker0 lladdr 02:42:ac:11:00:02 DELAY
2003:ab11::c000:242:ac11:2 dev eth0 lladdr 00:0c:29:7f:3d:64 REACHABLE
user@docker1:~$
user@docker2:~$ ip -6 neighbor show
fe80::42:acff:fe11:2 dev docker0 lladdr 02:42:ac:11:00:02 REACHABLE
2003:ab11::c000:242:ac11:2 dev docker0 lladdr 02:42:ac:11:00:02 REACHABLE
fe80::20c:29ff:fe50:b8cc dev eth0 lladdr 00:0c:29:50:b8:cc router REACHABLE
2003:ab11::8000:242:ac11:2 dev eth0 lladdr 00:0c:29:50:b8:cc REACHABLE
user@docker2:~$

Much like proxy ARP, NDP proxy works by having the host provide its own MAC address in response to the neighbor discovery request. We can see that, in both cases, the MAC address listed in the neighbor table for the remote container is actually the remote Docker host's eth0 MAC address:

user@docker1:~$ ip link show dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:50:b8:cc brd ff:ff:ff:ff:ff:ff
user@docker1:~$
user@docker2:~$ ip link show dev eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:7f:3d:64 brd ff:ff:ff:ff:ff:ff
user@docker2:~$

This approach works fairly well in cases where you can't advertise your Docker IPv6 subnet to the outside network. However, it relies on an individual proxy entry for each IPv6 address you wish to proxy. For every container you spawn, you would need to add another proxy entry, as the sketch below illustrates.
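If you need to do this at any real scale, a small wrapper script can add an entry for every running container. The following is a minimal sketch (the script name is arbitrary) that assumes each container has a single global IPv6 address on eth0, that the ip binary is available inside the containers (as it is in the images used in this recipe), and that the host's uplink is also eth0:

#!/bin/sh
# add-ndp-proxies.sh - add an NDP proxy entry on the host's eth0 for
# every running container's global IPv6 address
for cid in $(docker ps -q); do
    addr=$(docker exec "$cid" ip -6 addr show dev eth0 scope global |
           awk '/inet6/ {print $2}' | cut -d/ -f1)
    # use replace rather than add so re-running the script is harmless
    [ -n "$addr" ] && sudo ip -6 neigh replace proxy "$addr" dev eth0
done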
