As we mentioned earlier, Docker comes with a set of sensible defaults to get your containers communicating on the network. From a network perspective, the Docker default is to attach any spawned container to the docker0 bridge. In this recipe, we'll show how to connect containers in the default bridge mode and explain how network traffic leaving and destined for the container is handled.
You'll need access to a Docker host and an understanding of how your Docker host is connected to the network. In our example, we'll be using a Docker host that has two physical network interfaces, like the one shown in the following diagram:
You'll want to make sure that you have access to view iptables rules to verify netfilter policies. If you wish to download and run example containers, your Docker host will also need access to the Internet. In some cases, the changes we make may require you to have root-level access to the system.
After installing and starting Docker, you should notice the addition of a new Linux bridge named docker0. By default, the docker0 bridge has an IP address of 172.17.0.1/16:
user@docker1:~$ ip addr show docker0
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:54:87:8b:ea brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
user@docker1:~$
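You can also ask Docker itself for the bridge network's configuration. The following is a quick sketch; the subnet and gateway noted in the comment assume a default installation and may differ on your host:

```shell
# Inspect the default bridge network; the --format filter pulls out
# the subnet and gateway Docker assigned to docker0.
docker network inspect bridge \
  --format 'Subnet: {{(index .IPAM.Config 0).Subnet}} Gateway: {{(index .IPAM.Config 0).Gateway}}'
# On a default install this typically reports 172.17.0.0/16 and 172.17.0.1
```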
Docker will place any containers that are started without specifying a network on the docker0 bridge. Now, let's look at an example container running on this host:
user@docker1:~$ docker run -it jonlangemak/web_server_1 /bin/bash
root@abe6eae2e0b3:/# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
6: eth0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe11:2/64 scope link
       valid_lft forever preferred_lft forever
root@abe6eae2e0b3:/#
By running the container in interactive mode, we can examine what the container believes its network configuration to be. In this case, we can see that the container has a single non-loopback network adapter (eth0) with an IP address of 172.17.0.2/16.
In addition, we can see that the container believes its default gateway is the docker0 bridge interface on the Docker host:
root@abe6eae2e0b3:/# ip route
default via 172.17.0.1 dev eth0
172.17.0.0/16 dev eth0  proto kernel  scope link  src 172.17.0.2
root@abe6eae2e0b3:/#
By running some basic tests, we can see that the container has access to the physical interface of the Docker host as well as to Internet-based resources:
root@abe6eae2e0b3:/# ping 10.10.10.101 -c 2
PING 10.10.10.101 (10.10.10.101): 48 data bytes
56 bytes from 10.10.10.101: icmp_seq=0 ttl=64 time=0.084 ms
56 bytes from 10.10.10.101: icmp_seq=1 ttl=64 time=0.072 ms
--- 10.10.10.101 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.072/0.078/0.084/0.000 ms
root@abe6eae2e0b3:/#
root@abe6eae2e0b3:/# ping 4.2.2.2 -c 2
PING 4.2.2.2 (4.2.2.2): 48 data bytes
56 bytes from 4.2.2.2: icmp_seq=0 ttl=50 time=29.388 ms
56 bytes from 4.2.2.2: icmp_seq=1 ttl=50 time=26.766 ms
--- 4.2.2.2 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max/stddev = 26.766/28.077/29.388/1.311 ms
root@abe6eae2e0b3:/#
Given that the network the container lives on was created by Docker, we can safely assume that the rest of the network is not aware of it. That is, the outside network has no knowledge of the 172.17.0.0/16 network since it's local to the Docker host. That being said, it seems curious that the container is able to reach resources that live beyond the docker0 bridge. Docker makes this work by hiding containers' IP addresses behind the Docker host's IP interfaces. The traffic flow is shown in the following image:
Since the containers' traffic is seen on the physical network as the Docker host's IP address, other network resources know how to return the traffic to the container. To perform this outbound NAT, Docker uses the Linux netfilter framework. We can see these rules using the netfilter command-line tool iptables:
user@docker1:~$ sudo iptables -t nat -L
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination
DOCKER     all  --  anywhere             anywhere             ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
DOCKER     all  --  anywhere            !127.0.0.0/8          ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination
MASQUERADE  all  --  172.17.0.0/16        anywhere

Chain DOCKER (2 references)
target     prot opt source               destination
RETURN     all  --  anywhere             anywhere
user@docker1:~$
As you can see, we have a rule in the POSTROUTING chain that masquerades, or hides, anything sourced from our docker0 bridge (172.17.0.0/16) behind the host's interface.
Although outbound connectivity is configured and allowed by default, Docker does not by default provide a means to access services in the containers from outside the Docker host. In order to do this, we must pass Docker additional flags at container runtime. Specifically, we can pass the -P flag when we run the container. To examine this behavior, let's look at a container image that exposes a port:
docker run --name web1 -d -P jonlangemak/web_server_1
This tells Docker to map a random host port to any ports that the container image exposes. In the case of this demo container, the image exposes port 80. After running the container, we can see the host port mapped to the container:
user@docker1:~$ docker run --name web1 -P -d jonlangemak/web_server_1
556dc8cefd79ed1d9957cc52827bb23b7d80c4b887ee173c2e3b8478340de948
user@docker1:~$
user@docker1:~$ docker port web1
80/tcp -> 0.0.0.0:32768
user@docker1:~$
As we can see, the container's port 80 has been mapped to host port 32768. This means that we can access the service running on port 80 of the container through the host's interfaces at port 32768. Much like the outbound container access, inbound connectivity also uses netfilter to create the port mapping. We can see this by checking the NAT and filter tables:
user@docker1:~$ sudo iptables -t nat -L
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination
DOCKER     all  --  anywhere             anywhere             ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination
DOCKER     all  --  anywhere            !127.0.0.0/8          ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination
MASQUERADE  all  --  172.17.0.0/16        anywhere
MASQUERADE  tcp  --  172.17.0.2           172.17.0.2           tcp dpt:http

Chain DOCKER (2 references)
target     prot opt source               destination
RETURN     all  --  anywhere             anywhere
DNAT       tcp  --  anywhere             anywhere             tcp dpt:32768 to:172.17.0.2:80
user@docker1:~$ sudo iptables -t filter -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
DOCKER-ISOLATION  all  --  anywhere             anywhere
DOCKER     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
ACCEPT     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

Chain DOCKER (1 references)
target     prot opt source               destination
ACCEPT     tcp  --  anywhere             172.17.0.2           tcp dpt:http

Chain DOCKER-ISOLATION (1 references)
target     prot opt source               destination
RETURN     all  --  anywhere             anywhere
user@docker1:~$
Since the connectivity is being exposed on all interfaces (0.0.0.0), our inbound diagram will look like this:
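The inbound mapping can be verified from any machine that can reach the host. A minimal sketch, assuming the host address 10.10.10.101 from our lab and the mapped port 32768 shown earlier:

```shell
# From another machine on the network, hit the mapped host port.
# netfilter DNATs this to 172.17.0.2:80 inside the Docker host.
curl http://10.10.10.101:32768
```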
Unless configured otherwise, containers that live on the same host, and hence on the same docker0 bridge, can inherently communicate with each other by their assigned IP addresses on any port that is bound to a service. Allowing this communication is the default behavior and can be changed, as we'll see in a later chapter when we discuss Inter-Container Communication (ICC) configuration.
It should be noted that this is the default behavior for containers that are run without specifying any additional network parameters, that is, containers that use the Docker default bridge network. Later chapters will introduce other options that allow you to place containers living on the same host on different networks.
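As a brief preview of those options, a user-defined bridge network can be created with a single command. This is a sketch; the network name mynet and the subnet are made-up example values:

```shell
# Create a user-defined bridge with an explicit subnet (values are examples).
docker network create --driver bridge --subnet 192.168.127.0/24 mynet

# Containers started with --network land on that bridge instead of docker0.
docker run --name web3 -d --network mynet jonlangemak/web_server_1
```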
Communication between containers that live on different hosts requires a combination of both of the previously discussed flows. To test this out, let's expand our lab by adding a second host named docker2. Let's assume the container web2 on the host docker2 wishes to access the container web1 living on the host docker1, which is hosting a service on port 80. The flow will look like this:
Let's walk through the flow at each step and show what the packets look like as they hit the wire. In this case, the container web1 is exposing port 80, which has been published to port 32771 on the host docker1:

1. The container web2 generates traffic destined for the exposed port (32771) on the 10.10.10.101 interface of the host docker1.
2. The traffic arrives at the container's default gateway, which is the docker0 bridge (172.17.0.1). The host does a route lookup and determines that the destination lives out of its 10.10.10.102 interface, so it hides the container's real source IP behind that interface's IP address.
3. The traffic arrives at the docker1 host and is examined by the netfilter rules. docker1 has a rule that exposes the service port of container 1 (80) on port 32771 of the host.
4. The destination port is changed from 32771 to 80 and the traffic is passed along to the web1 container, which receives the traffic on the correct port 80.

To try this out for ourselves, let's first run the web1 container and check what port the service is exposed on:
user@docker1:~/apache$ docker run --name web1 -P -d jonlangemak/web_server_1
974e6eba1948ce5e4c9ada393b1196482d81f510de12337868ad8ef65b8bf723
user@docker1:~/apache$
user@docker1:~/apache$ docker port web1
80/tcp -> 0.0.0.0:32771
user@docker1:~/apache$
Now let's run a second container called web2 on the host docker2 and attempt to access web1's service on port 32771:
user@docker2:~$ docker run --name web2 -it jonlangemak/web_server_2 /bin/bash
root@a97fea6fb0c9:/#
root@a97fea6fb0c9:/# curl http://10.10.10.101:32771
<body>
  <html>
    <h1><span style="color:#FF0000;font-size:72px;">Web Server #1 - Running on port 80</span>
    </h1>
  </body>
</html>