In earlier chapters, we were exposed to the concept of ICC mode, but didn't have much information on the mechanics of how it works. ICC (inter-container communication) is a Docker-native means of controlling whether containers connected to the same network can talk to each other directly. Disabling ICC isolates the containers from one another while still allowing their exposed ports to be published as well as allowing outbound connectivity. In this recipe, we'll review our options for ICC-based configuration on both the default docker0 bridge as well as on user-defined networks.
We'll be using two Docker hosts in this recipe to demonstrate how ICC works in different network configurations. It is assumed that both Docker hosts used in this lab are in their default configuration. In some cases, the changes we make may require you to have root-level access to the system.
ICC mode can be configured on both the native docker0 bridge and any user-defined networks that utilize the bridge driver. In this recipe, we'll review how to configure ICC mode on the docker0 bridge. As we've seen in earlier chapters, settings related to the docker0 bridge need to be made at the service level. This is because the docker0 bridge is created as part of service initialization. This also means that, to make changes to it, we'll need to edit the Docker service configuration and then restart the service for the changes to take effect. Before we make any changes, let's take the opportunity to review the default ICC configuration. To do this, let's first view the docker0 bridge configuration:
user@docker1:~$ docker network inspect bridge
[
{
"Name": "bridge",
"Id": "d88fa0a96585792f98023881978abaa8c5d05e4e2bbd7b4b44a6e7b0ed7d346b",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.17.0.0/16",
"Gateway": "172.17.0.1"
}
]
},
"Internal": false,
"Containers": {},
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "true",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
},
"Labels": {}
}
]
user@docker1:~$
As we can see, the docker0 bridge is configured with ICC mode on (enable_icc is true). This means that Docker will not interfere with or prevent containers connected to this bridge from talking directly to one another. To prove this out, let's start two containers:
user@docker1:~$ docker run -d --name=web1 jonlangemak/web_server_1
417dd2587dfe3e664b67a46a87f90714546bec9c4e35861476d5e4fa77e77e61
user@docker1:~$ docker run -d --name=web2 jonlangemak/web_server_2
a54db26074c00e6771d0676bb8093b1a22eb95a435049916becd425ea9587014
user@docker1:~$
Notice that we didn't specify the -P flag, so Docker did not publish any of the containers' exposed ports. Now, let's get each container's IP address so we can validate connectivity:
user@docker1:~$ docker exec web1 ip addr show dev eth0 | grep inet
    inet 172.17.0.2/16 scope global eth0
    inet6 fe80::42:acff:fe11:2/64 scope link
user@docker1:~$ docker exec web2 ip addr show dev eth0 | grep inet
    inet 172.17.0.3/16 scope global eth0
    inet6 fe80::42:acff:fe11:3/64 scope link
user@docker1:~$
Now that we know the IP addresses, we can verify that each container can access the other on any service on which the other container is listening:
user@docker1:~$ docker exec -it web1 ping 172.17.0.3 -c 2
PING 172.17.0.3 (172.17.0.3): 48 data bytes
56 bytes from 172.17.0.3: icmp_seq=0 ttl=64 time=0.198 ms
56 bytes from 172.17.0.3: icmp_seq=1 ttl=64 time=0.082 ms
--- 172.17.0.3 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.082/0.140/0.198/0.058 ms
user@docker1:~$
user@docker1:~$ docker exec web2 curl -s http://172.17.0.2
<body>
  <html>
    <h1><span style="color:#FF0000;font-size:72px;">Web Server #1 - Running on port 80</span>
    </h1>
</body>
  </html>
user@docker1:~$
Based on these tests, we can assume that the containers are allowed to talk to each other on any protocol that is listening. This is the expected behavior when ICC mode is enabled. Now, let's change the service-level setting and recheck our configuration. To do this, set the following configuration in your systemd drop-in file for the Docker service:
ExecStart=/usr/bin/dockerd --icc=false
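If you haven't created a drop-in file before, the following is a minimal sketch of one way to do it. The directory and file name here (docker.conf) are illustrative and may vary by distribution; note that systemd requires an empty ExecStart= line to clear the unit's original ExecStart before defining a new one:

```shell
# Create the drop-in directory for the Docker unit (path may vary by distro)
sudo mkdir -p /etc/systemd/system/docker.service.d

# Write a drop-in that replaces ExecStart; the blank ExecStart= line
# clears the value inherited from the packaged unit file
sudo tee /etc/systemd/system/docker.service.d/docker.conf <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd --icc=false
EOF
```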
Now reload the systemd configuration, restart the Docker service, and check the ICC setting:
user@docker1:~$ sudo systemctl daemon-reload
user@docker1:~$ sudo systemctl restart docker
user@docker1:~$ docker network inspect bridge
…<Additional output removed for brevity>…
"Options": {
"com.docker.network.bridge.default_bridge": "true",
"com.docker.network.bridge.enable_icc": "false",
"com.docker.network.bridge.enable_ip_masquerade": "true",
"com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
"com.docker.network.bridge.name": "docker0",
"com.docker.network.driver.mtu": "1500"
…<Additional output removed for brevity>…
user@docker1:~$
Now that we've confirmed that ICC is disabled, let's start up our two containers once again and run the same connectivity tests:
user@docker1:~$ docker start web1
web1
user@docker1:~$ docker start web2
web2
user@docker1:~$
user@docker1:~$ docker exec -it web1 ping 172.17.0.3 -c 2
PING 172.17.0.3 (172.17.0.3): 48 data bytes
user@docker1:~$ docker exec -it web2 curl -m 1 http://172.17.0.2
curl: (28) connect() timed out!
user@docker1:~$
As you can see, we have no connectivity between the two containers. However, the Docker host itself is still able to access the services:
user@docker1:~$ curl http://172.17.0.2
<body>
  <html>
    <h1><span style="color:#FF0000;font-size:72px;">Web Server #1 - Running on port 80</span>
    </h1>
</body>
  </html>
user@docker1:~$ curl http://172.17.0.3
<body>
  <html>
    <h1><span style="color:#FF0000;font-size:72px;">Web Server #2 - Running on port 80</span>
    </h1>
</body>
  </html>
user@docker1:~$
We can inspect the netfilter rules that are used to implement ICC by looking at the FORWARD chain of the iptables filter table:
user@docker1:~$ sudo iptables -S FORWARD
-P FORWARD ACCEPT
-A FORWARD -j DOCKER-ISOLATION
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j DROP
user@docker1:~$
The last rule in the preceding output (-A FORWARD -i docker0 -o docker0 -j DROP) is what prevents container-to-container communication on the docker0 bridge. If we had inspected this iptables chain before disabling ICC, we would have seen that rule set to ACCEPT, as shown in the following:
user@docker1:~$ sudo iptables -S FORWARD
-P FORWARD ACCEPT
-A FORWARD -j DOCKER-ISOLATION
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
user@docker1:~$
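With ICC disabled, a specific flow can in principle still be permitted by inserting a more specific ACCEPT rule ahead of the docker0 DROP rule. The following is an illustrative sketch only (the address and port come from the examples above, and the rule will not survive a Docker service restart):

```shell
# Insert at the top of the FORWARD chain an ACCEPT for traffic between
# containers on docker0 destined for web1 (172.17.0.2) on TCP port 80
sudo iptables -I FORWARD -i docker0 -o docker0 \
  -p tcp -d 172.17.0.2 --dport 80 -j ACCEPT
```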
As we saw earlier, linking containers allows you to bypass this rule, letting a source container access a target container. If we remove the two containers, we can restart them with a link as follows:
user@docker1:~$ docker run -d --name=web1 jonlangemak/web_server_1
9846614b3bac6a2255e135d19f20162022a40d95bd62a0264ef4aaa89e24592f
user@docker1:~$ docker run -d --name=web2 --link=web1 jonlangemak/web_server_2
b343b570189a0445215ad5406e9a2746975da39a1f1d47beba4d20f14d687d83
user@docker1:~$
Now if we examine the rules with iptables, we can see two new rules added to the filter table:
user@docker1:~$ sudo iptables -S
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-N DOCKER
-N DOCKER-ISOLATION
-A FORWARD -j DOCKER-ISOLATION
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j DROP
-A DOCKER -s 172.17.0.3/32 -d 172.17.0.2/32 -i docker0 -o docker0 -p tcp -m tcp --dport 80 -j ACCEPT
-A DOCKER -s 172.17.0.2/32 -d 172.17.0.3/32 -i docker0 -o docker0 -p tcp -m tcp --sport 80 -j ACCEPT
-A DOCKER-ISOLATION -j RETURN
user@docker1:~$
These two new rules allow web2 to access web1 on any exposed port. Notice how the first rule defines the access from web2 (172.17.0.3) to web1 (172.17.0.2) with a destination port of 80. The second rule flips the IPs and specifies port 80 as the source port, allowing the traffic to return to web2.
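As mentioned at the start of this recipe, ICC can also be controlled on user-defined networks that use the bridge driver, and there it doesn't require touching the service configuration at all. A sketch, assuming a hypothetical network name of isolated_net, is as follows:

```shell
# Create a user-defined bridge network with ICC disabled via a driver option;
# no daemon flags or service restart are needed
docker network create \
  -o com.docker.network.bridge.enable_icc=false \
  isolated_net

# Containers attached to this network cannot reach each other directly,
# but published ports and outbound connectivity still work
docker run -d --name=web3 --network=isolated_net jonlangemak/web_server_1
docker run -d --name=web4 --network=isolated_net jonlangemak/web_server_2
```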