When I began writing this book, the current version of Docker was 1.10 and at that time MacVLAN functionality was included in the release candidate version of Docker. Since then, version 1.12 has been released, which pushed MacVLAN into the release version of the software. That being said, the only requirement to use the MacVLAN driver is to ensure that you have a 1.12 or newer version of Docker installed. In this chapter, we'll review how to consume the MacVLAN network driver for containers provisioned from Docker.
In this recipe, we'll be using two Linux hosts running Docker. Our lab topology will consist of two Docker hosts that live on the same network. It will look like this:
It is assumed that each host is running a version of Docker that is 1.12 or greater in order to have access to the MacVLAN driver. The hosts should have a single IP interface and Docker should be in its default configuration. In some cases, the changes we make may require you to have root-level access to the system.
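Before going any further, it's worth confirming the Docker version on each host. A minimal check (the --format template below simply prints the server's version string):

user@docker1:~$ docker version --format '{{.Server.Version}}'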
Much like all of the other user-defined network types, the MacVLAN driver is handled through the docker network subcommand. Creating a MacVLAN type network is just as easy as creating any other network type, but there are a few things to keep in mind that are specific to this driver.
Notably, an --internal flag is available when creating networks with the MacVLAN driver. When it is specified, the parent interface is defined as a dummy interface, which prevents traffic from leaving the host (we'll sketch an example after the network definitions below). Taking this into consideration with our current lab topology, we can define the network as follows on each host:
user@docker1:~$ docker network create -d macvlan \
--subnet 10.10.10.0/24 --ip-range 10.10.10.0/25 \
--gateway=10.10.10.1 \
--aux-address docker1=10.10.10.101 \
--aux-address docker2=10.10.10.102 \
-o parent=eth0 macvlan_net

user@docker2:~$ docker network create -d macvlan \
--subnet 10.10.10.0/24 --ip-range 10.10.10.128/25 \
--gateway=10.10.10.1 \
--aux-address docker1=10.10.10.101 \
--aux-address docker2=10.10.10.102 \
-o parent=eth0 macvlan_net
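For completeness, here is a sketch of what an isolated variant might look like using the --internal flag discussed earlier. The network name macvlan_net_internal is hypothetical, and the parent option is omitted because Docker binds an internal MacVLAN network to a dummy interface instead:

user@docker1:~$ docker network create -d macvlan --internal \
--subnet 10.10.10.0/24 --gateway=10.10.10.1 macvlan_net_internal

Containers attached to such a network can still reach each other on the host, but their traffic cannot leave it.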
With the macvlan_net configuration shown above, each host on the network will use one half of the available subnet for allocation, in this case a /25. Since Docker's IPAM automatically reserves the gateway IP address for us, there's no need to prevent it from being allocated by defining it as an auxiliary address. However, since the Docker hosts' interfaces themselves do live within this range, we do need to reserve those addresses with auxiliary addresses.
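If you'd like to confirm how Docker recorded these settings, docker network inspect will echo them back. In the output (omitted here), you should see the driver listed as macvlan, an IPAM Config block containing the subnet, IP range, gateway, and auxiliary addresses from above, and parent: eth0 under the network's options:

user@docker1:~$ docker network inspect macvlan_net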
We can now define containers on each host and verify that they can communicate with each other:
user@docker1:~$ docker run -d --name=web1 --net=macvlan_net jonlangemak/web_server_1
user@docker1:~$ docker exec web1 ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
7: eth0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 02:42:0a:0a:0a:02 brd ff:ff:ff:ff:ff:ff
    inet 10.10.10.2/24 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:aff:fe0a:a02/64 scope link
       valid_lft forever preferred_lft forever
user@docker1:~$

user@docker2:~$ docker run -d --name=web2 --net=macvlan_net jonlangemak/web_server_2
user@docker2:~$ docker exec web2 ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
4: eth0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 02:42:0a:0a:0a:80 brd ff:ff:ff:ff:ff:ff
    inet 10.10.10.128/24 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:aff:fe0a:a80/64 scope link
       valid_lft forever preferred_lft forever
user@docker2:~$
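Assuming the image includes the ping utility (which we can't guarantee for every image), the simplest cross-host check is to ping one container from the other:

user@docker1:~$ docker exec web1 ping -c 2 10.10.10.128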
You'll note that there isn't a need to publish ports when the containers are run. Since each container now has a uniquely routable IP address, port publishing is not required: any container can offer any service on its own unique IP address.
Much like with other network types, Docker creates a network namespace for each container, which it then maps the container's MacVLAN interface into. Our topology at this point looks like this:
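If you'd like to see that mapping for yourself, the namespace's path is recorded in the container's SandboxKey field, and nsenter can run commands inside it. A quick sketch (the path stored in SandboxKey will differ from system to system):

user@docker1:~$ sandbox=$(docker inspect -f '{{.NetworkSettings.SandboxKey}}' web1)
user@docker1:~$ sudo nsenter --net=$sandbox ip addr show eth0

The eth0 reported here is the same MacVLAN interface the container showed us above.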
From an external test host that lives on a different subnet, we can verify that each container's services are reachable via the container's IP address:
user@test_server:~$ curl http://10.10.10.2
<body>
  <html>
    <h1><span style="color:#FF0000;font-size:72px;">Web Server #1 - Running on port 80</span>
    </h1>
  </body>
  </html>
user@test_server:~$ curl http://10.10.10.128
<body>
  <html>
    <h1><span style="color:#FF0000;font-size:72px;">Web Server #2 - Running on port 80</span>
    </h1>
  </body>
  </html>
user@test_server:~$
However, you will note that containers attached to MacVLAN networks are not accessible from the local Docker host, despite sharing the same parent interface:
user@docker1:~$ ping 10.10.10.2
PING 10.10.10.2 (10.10.10.2) 56(84) bytes of data.
From 10.10.10.101 icmp_seq=1 Destination Host Unreachable
--- 10.10.10.2 ping statistics ---
5 packets transmitted, 0 received, +1 errors, 100% packet loss, time 0ms
user@docker1:~$
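This is inherent to how MacVLAN works: the kernel does not switch traffic between a MacVLAN subinterface and its parent, so the host cannot reach the containers directly over eth0. One common workaround, sketched here, is to give the host its own MacVLAN interface on the same parent and route container traffic through it. The interface name macvlan0 and the address 10.10.10.250 are assumptions on our part; pick an address that is unused on your network:

user@docker1:~$ sudo ip link add macvlan0 link eth0 type macvlan mode bridge
user@docker1:~$ sudo ip addr add 10.10.10.250/32 dev macvlan0
user@docker1:~$ sudo ip link set macvlan0 up
user@docker1:~$ sudo ip route add 10.10.10.2/32 dev macvlan0
user@docker1:~$ ping -c 2 10.10.10.2

Since macvlan0 is itself a MacVLAN subinterface in bridge mode, traffic it sends toward the container is switched in the kernel rather than being dropped at eth0.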
The current implementation of Docker supports MacVLAN only in MacVLAN bridge mode. We can verify that this is the operating mode of the MacVLAN interface by checking the details of the interface within the container:
user@docker1:~$ docker exec web1 ip -d link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
5: eth0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 02:42:0a:0a:0a:02 brd ff:ff:ff:ff:ff:ff
    macvlan mode bridge
user@docker1:~$
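Bridge is also the mode the driver selects by default, though it can be set explicitly at creation time through the driver's macvlan_mode option. A sketch, using a hypothetical network name:

user@docker1:~$ docker network create -d macvlan \
-o parent=eth0 -o macvlan_mode=bridge \
--subnet 10.10.10.0/24 --gateway=10.10.10.1 macvlan_net_bridge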