Tagging VLAN IDs with MacVLAN and IPVLAN networks

One feature that's available with both the MacVLAN and IPVLAN Docker network types is the ability to tag containers onto a particular VLAN. This is possible because both network types leverage a parent interface. In this recipe, we'll show you how to create Docker networks that are VLAN tagged, or VLAN aware. Since this functionality works the same way for either network type, we'll focus on configuring it with MacVLAN networks.

Getting ready

In this recipe, we'll be using a single Docker host to demonstrate how the Linux host can send VLAN tagged frames to upstream network devices. Our lab topology will be as follows:

[Figure: Lab topology]

It is assumed that this host is running Docker version 1.12. The host has two network interfaces: eth0, with an IP address of 10.10.10.101, and eth1, which is up but has no IP address configured on it.
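If you'd like to verify the starting state of the host before continuing, a quick check such as the following should do. The interface names and addressing match this lab and may differ in your environment:

# eth0 should show 10.10.10.101; eth1 should be UP with no address assigned
ip addr show eth0
ip addr show eth1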

How to do it…

One of the interesting features that comes along with the MacVLAN and IPVLAN network drivers is the ability to provision subinterfaces. A subinterface is a logical partition of what's typically a physical interface. The standard way of partitioning a physical interface is to leverage VLANs. You'll commonly hear this referred to as dot1q trunking or VLAN tagging. To do this, the upstream network interface has to be prepared to receive tagged frames and be able to interpret the tag. In all of our previous examples, the upstream network port was hard-coded to a particular VLAN. This is the case with the eth0 interface of this server. It is plugged into a port on the switch that is statically configured for VLAN 10. In addition to this, the switch also has an IP interface on VLAN 10, which in our case is 10.10.10.1/24. It acts as the server's default gateway. Frames sent from the server's eth0 interface are received by the switch and end up in VLAN 10. That piece is pretty straightforward.
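From the host's perspective, that static arrangement is invisible; the only hint is the default route pointing at the switch's VLAN 10 interface. A quick check might look like this, assuming the addressing described above:

# Given the lab addressing, expect a default route via 10.10.10.1 on eth0
ip route show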

The other option is to have the server tell the switch what VLAN it wishes to be in. To do this, we create a subinterface on the server that is specific to a given VLAN. Traffic leaving that interface is tagged with the VLAN number and sent on its way to the switch. In order for this to work, the switch port needs to be configured as a trunk. Trunks are interfaces that can carry multiple VLANs and are VLAN tag (dot1q) aware. When the switch receives the frame, it references the VLAN tag in the frame and puts the traffic into the right VLAN based on the tag. Logically, you might depict a trunk configuration as follows:

[Figure: Logical view of the eth1 trunk port and the statically assigned eth0 port]

We depict the eth1 interface as a wide channel that can support connectivity to a large number of VLANs. We can see that the trunk port can connect to all of the possible VLAN interfaces based on the tag it receives. The eth0 interface is statically bound to the VLAN 10 interface.

Note

It is wise in production environments to limit the VLANs allowed on a trunk port. Not doing so would mean someone could potentially gain access to any VLAN on the switch just by specifying the right dot1q tag.

This functionality has been around for a long time, and Linux system administrators are likely familiar with the manual process used to create VLAN tagged subinterfaces. The interesting piece is that Docker can now manage this for you. For instance, we can create two different MacVLAN networks:

user@docker1:~$ docker network create -d macvlan -o parent=eth1.19 \
--subnet=10.10.90.0/24 --gateway=10.10.90.1 vlan19
8f545359f4ca19ee7349f301e5af2c84d959e936a5b54526b8692d0842a94378

user@docker1:~$ docker network create -d macvlan -o parent=eth1.20 \
--subnet=192.168.20.0/24 --gateway=192.168.20.1 vlan20
df45e517a6f499d589cfedabe7d4a4ef5a80ed9c88693f255f8ceb91fe0bbb0f
user@docker1:~$

The networks are defined much like any other MacVLAN network. What's different is that we specified .19 and .20 on the parent interface names. Appending a dot and a number to an interface name is the common syntax for defining VLAN subinterfaces. If we look at the host's network interfaces, we should see the addition of two new ones:

user@docker1:~$ ip -d link show
…<Additional output removed for brevity>…
5: eth1.19@eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
    link/ether 00:0c:29:50:b8:d6 brd ff:ff:ff:ff:ff:ff promiscuity 0
    vlan protocol 802.1Q id 19 <REORDER_HDR> addrgenmode eui64
6: eth1.20@eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
    link/ether 00:0c:29:50:b8:d6 brd ff:ff:ff:ff:ff:ff promiscuity 0
    vlan protocol 802.1Q id 20 <REORDER_HDR> addrgenmode eui64
user@docker1:~$

We can tell from this output that Docker created two 802.1Q VLAN subinterfaces, tagged for VLANs 19 and 20, whose parent is the physical interface eth1.
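If you'd like to confirm the VLAN IDs another way, a couple of other views are available. The following is a sketch that assumes the 8021q module is loaded and that the vlan19 network exists as created above:

# List the kernel's VLAN subinterfaces and their VLAN IDs
cat /proc/net/vlan/config

# Show the driver options for the Docker network; the parent subinterface should appear here
docker network inspect --format '{{json .Options}}' vlan19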

If we launch containers on both of these networks, we'll see that they end up within either VLAN 19 or VLAN 20 based on which network we specify:

user@docker1:~$ docker run --net=vlan19 --name=web1 -d \
jonlangemak/web_server_1
7f54eec28098eb6e589c8d9601784671b9988b767ebec5791540e1a476ea5345
user@docker1:~$
user@docker1:~$ docker run --net=vlan20 --name=web2 -d \
jonlangemak/web_server_2
a895165c46343873fa11bebc355a7826ef02d2f24809727fb4038a14dd5e7d4a
user@docker1:~$
user@docker1:~$ docker exec web1 ip addr show dev eth0
7: eth0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 02:42:0a:0a:5a:02 brd ff:ff:ff:ff:ff:ff
    inet 10.10.90.2/24 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:aff:fe0a:5a02/64 scope link
       valid_lft forever preferred_lft forever
user@docker1:~$
user@docker1:~$ docker exec web2 ip addr show dev eth0
8: eth0@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 02:42:c0:a8:14:02 brd ff:ff:ff:ff:ff:ff
    inet 192.168.20.2/24 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:c0ff:fea8:1402/64 scope link
       valid_lft forever preferred_lft forever
user@docker1:~$

And if we attempt to send traffic from each container to its respective gateway, we'll find that both are reachable:

user@docker1:~$ docker exec -it web1 ping 10.10.90.1 -c 2
PING 10.10.90.1 (10.10.90.1): 48 data bytes
56 bytes from 10.10.90.1: icmp_seq=0 ttl=255 time=0.654 ms
56 bytes from 10.10.90.1: icmp_seq=1 ttl=255 time=0.847 ms
--- 10.10.90.1 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.654/0.750/0.847/0.097 ms
user@docker1:~$ docker exec -it web2 ping 192.168.20.1 -c 2
PING 192.168.20.1 (192.168.20.1): 48 data bytes
56 bytes from 192.168.20.1: icmp_seq=0 ttl=255 time=0.703 ms
56 bytes from 192.168.20.1: icmp_seq=1 ttl=255 time=0.814 ms
--- 192.168.20.1 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.703/0.758/0.814/0.056 ms
user@docker1:~$

If we capture the frames as they leave the server, we'll even be able to see the dot1q (VLAN) tag in the layer 2 header:

[Figure: Packet capture showing the 802.1Q (dot1q) VLAN tag in the layer 2 header of frames leaving eth1]
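If you'd like to reproduce that capture yourself, a command along these lines should work. This is a sketch and assumes tcpdump is installed on the host:

# -e prints the link-level header so the 802.1Q tag is visible;
# the 'vlan 19' filter limits the capture to frames tagged for VLAN 19
sudo tcpdump -e -nn -i eth1 vlan 19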

As with the other network constructs it creates, Docker also takes care of cleaning up these subinterfaces when you delete the user-defined networks. In addition, if you prefer to build the subinterfaces yourself, Docker can consume interfaces that you have already created, so long as the name matches the parent name you specify.
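As a sketch of what that might look like, the following commands create a VLAN subinterface by hand and then hand it to Docker. The VLAN ID, subnet, and network name here are made up for illustration:

# Create and bring up an 802.1Q subinterface for VLAN 30 on eth1
sudo ip link add link eth1 name eth1.30 type vlan id 30
sudo ip link set eth1.30 up

# Docker reuses the existing subinterface because the parent name matches
docker network create -d macvlan -o parent=eth1.30 \
--subnet=10.10.30.0/24 --gateway=10.10.30.1 vlan30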

Being able to specify VLAN tags as part of a user-defined network is a big deal and makes presenting containers to the physical network a much easier task.
