Overlay network with Docker Machine and Docker Swarm

This section explains the basics of creating a multi-host network. The Docker Engine supports multi-host networking through the overlay network driver. The overlay driver needs the following prerequisites to work:

  • 3.16 Linux kernel or higher
  • Access to a key-value store
  • Docker supports the following key-value stores: Consul, etcd, and ZooKeeper
  • A cluster of hosts connected to the key-value store
  • Docker Engine daemon on each host in the cluster
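The kernel and tooling prerequisites can be checked from a shell before starting. The following is a minimal sketch; the 3.16 version floor is the one stated above, and the tool names checked are assumptions about what is on your PATH:

```shell
# Check the kernel version (the overlay driver needs 3.16 or higher).
required_major=3; required_minor=16
kernel=$(uname -r | cut -d- -f1)            # e.g. "4.4.0"
major=$(echo "$kernel" | cut -d. -f1)
minor=$(echo "$kernel" | cut -d. -f2)
if [ "$major" -gt "$required_major" ] || { [ "$major" -eq "$required_major" ] && [ "$minor" -ge "$required_minor" ]; }; then
    echo "kernel $kernel: OK for overlay networking"
else
    echo "kernel $kernel: too old for overlay networking"
fi

# Confirm that the client-side tools used in this section are installed.
for tool in docker docker-machine; do
    command -v "$tool" >/dev/null || echo "$tool not found in PATH"
done
```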

This example uses Docker Machine and Docker Swarm to create the multi-host network.

Docker Machine is used to create the key-value store server and the cluster. The cluster created is a Docker Swarm cluster.

The following diagram explains how three VMs are set up using Docker Machine:

Overlay network with Docker Machine and Docker Swarm

Prerequisites

  • Vagrant
  • Docker Engine
  • Docker Machine
  • Docker Swarm

Key-value store installation

An overlay network requires a key-value store. The key-value store holds information about the network state such as discovery, networks, endpoints, IP addresses, and so on. Docker supports various key-value stores such as Consul, etcd, and ZooKeeper. This section has been implemented using Consul.

The following are the steps to install the key-value store:

  1. Provision a VirtualBox virtual machine called mh-keystore.

    When a new VM is provisioned, the process adds the Docker Engine to the host. The Consul instance will use the progrium/consul image from Docker Hub (https://hub.docker.com/r/progrium/consul/):

    $ docker-machine create -d virtualbox mh-keystore
    Running pre-create checks...
    Creating machine...
    (mh-keystore) Creating VirtualBox VM...
    (mh-keystore) Creating SSH key...
    (mh-keystore) Starting VM...
    Waiting for machine to be running, this may take a few minutes...
    Machine is running, waiting for SSH to be available...
    Detecting operating system of created instance...
    Detecting the provisioner...
    Provisioning with boot2docker...
    Copying certs to the local machine directory...
    Copying certs to the remote machine...
    Setting Docker configuration on the remote daemon...
    Checking connection to Docker...
    Docker is up and running!
    To see how to connect Docker to this machine, run: docker-machine env mh-keystore
    
  2. Start a progrium/consul container on the mh-keystore virtual machine created previously:
    $ docker $(docker-machine config mh-keystore) run -d \
    >     -p "8500:8500" \
    >     -h "consul" \
    >     progrium/consul -server -bootstrap
    
    Unable to find image 'progrium/consul:latest' locally
    latest: Pulling from progrium/consul
    3b4d28ce80e4: Pull complete
    
    d9125e9e799b: Pull complete
    Digest: sha256:8cc8023462905929df9a79ff67ee435a36848ce7a10f18d6d0faba9306b97274
    Status: Downloaded newer image for progrium/consul:latest
    032884c7834ce22707ed08068c24c503d599499f1a0a58098c31be9cc84d8e6c
    

    A bash command substitution, $(docker-machine config mh-keystore), is used to pass the connection configuration to the docker run command. The client starts a container from the progrium/consul image on the mh-keystore machine. The container's hostname is set to consul (the -h flag), and it listens on port 8500 (you can choose any other host port as well).
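To see exactly what that substitution injects, you can run docker-machine config on its own. Its output is a set of TLS connection flags for the Docker client, roughly of the following shape; the certificate paths and IP address shown here are examples and will differ on your machine:

```shell
$ docker-machine config mh-keystore
--tlsverify
--tlscacert="/Users/you/.docker/machine/machines/mh-keystore/ca.pem"
--tlscert="/Users/you/.docker/machine/machines/mh-keystore/cert.pem"
--tlskey="/Users/you/.docker/machine/machines/mh-keystore/key.pem"
-H=tcp://192.168.99.100:2376
```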

  3. Set the local environment to the mh-keystore virtual machine:
    $ eval "$(docker-machine env mh-keystore)"
    
  4. Execute the docker ps command to make sure the Consul container is up:
    $ docker ps
    CONTAINER ID      IMAGE            COMMAND               CREATED
    032884c7834c   progrium/consul   "/bin/start -server -"   47 seconds ago
       STATUS          PORTS
    Up 46 seconds  53/tcp, 53/udp, 8300-8302/tcp, 8301-8302/udp, 8400/tcp, 0.0.0.0:8500->8500/tcp
    NAMES
    sleepy_austin
    

Output has been formatted to fit in the page.
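Before moving on, it is worth confirming that Consul is actually answering on the published port. One way, sketched below, queries the /v1/status/leader endpoint of Consul's HTTP API; the address in the reply is the cluster-internal address of the elected leader and will differ on your setup:

```shell
$ curl "http://$(docker-machine ip mh-keystore):8500/v1/status/leader"
"172.17.0.2:8300"
```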


Create a Swarm cluster with two nodes

In this step, we will use Docker Machine to provision two hosts for your network, creating two virtual machines in VirtualBox. One of the machines will be the Swarm master, and it will be created first.

As each host is created, options for the overlay network driver are passed to its Docker Engine, using the following steps:

  1. Create a Swarm master virtual machine mhs-demo0:
    $ docker-machine create \
    -d virtualbox \
    --swarm --swarm-master \
    --swarm-discovery="consul://$(docker-machine ip mh-keystore):8500" \
    --engine-opt="cluster-store=consul://$(docker-machine ip mh-keystore):8500" \
    --engine-opt="cluster-advertise=eth1:2376" \
    mhs-demo0
    

    At creation time, you supply the Engine daemon with the cluster-store option (via --engine-opt). This option tells the Engine the location of the key-value store for the overlay network. The bash command substitution $(docker-machine ip mh-keystore) resolves to the IP address of the Consul server you created in step 1 of the preceding section. The cluster-advertise option advertises the machine's address and port on the network.
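You can confirm that the daemon on the new machine actually picked up these options by pointing your client at it and checking docker info, which reports the cluster store and advertise address on Docker 1.9 and later. The IP addresses below are examples and will differ on your setup:

```shell
$ eval "$(docker-machine env mhs-demo0)"
$ docker info | grep -i cluster
Cluster store: consul://192.168.99.100:8500
Cluster advertise: 192.168.99.101:2376
```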

  2. Create another virtual machine mhs-demo1 and add it to the Docker Swarm cluster:
    $ docker-machine create -d virtualbox \
        --swarm \
        --swarm-discovery="consul://$(docker-machine ip mh-keystore):8500" \
        --engine-opt="cluster-store=consul://$(docker-machine ip mh-keystore):8500" \
        --engine-opt="cluster-advertise=eth1:2376" \
    mhs-demo1
    
    Running pre-create checks...
    Creating machine...
    (mhs-demo1) Creating VirtualBox VM...
    (mhs-demo1) Creating SSH key...
    (mhs-demo1) Starting VM...
    Waiting for machine to be running, this may take a few minutes...
    Machine is running, waiting for SSH to be available...
    Detecting operating system of created instance...
    Detecting the provisioner...
    Provisioning with boot2docker...
    Copying certs to the local machine directory...
    Copying certs to the remote machine...
    Setting Docker configuration on the remote daemon...
    Configuring swarm...
    Checking connection to Docker...
    Docker is up and running!
    To see how to connect Docker to this machine, run: docker-machine env mhs-demo1
    
  3. List virtual machines using Docker Machine to confirm that they are all up and running:
    $ docker-machine ls
    
    NAME          ACTIVE   DRIVER       STATE     URL                         SWARM                DOCKER   ERRORS
    mh-keystore   *        virtualbox   Running   tcp://192.168.99.100:2376                        v1.9.1
    mhs-demo0     -        virtualbox   Running   tcp://192.168.99.101:2376   mhs-demo0 (master)   v1.9.1
    mhs-demo1     -        virtualbox   Running   tcp://192.168.99.102:2376   mhs-demo0            v1.9.1
    

    At this point, virtual machines are running. We are ready to create a multi-host network for containers using these virtual machines.
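To issue commands against the whole cluster rather than a single Engine, point the local client at the Swarm master; the --swarm flag of docker-machine env makes subsequent docker commands talk to the Swarm endpoint. The node count and role shown below are what this two-node setup should report, though exact docker info wording varies by version:

```shell
$ eval "$(docker-machine env --swarm mhs-demo0)"
$ docker info | grep -E "Nodes|Role"
Role: primary
Nodes: 2
```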

Creating an overlay network

The following command is used to create an overlay network:

$ docker network create --driver overlay my-net
bd85c87911491d7112739e6cf08d732eb2a2841c6ca1efcc04d0b20bbb832a33

We only need to create the network on a single host in the Swarm cluster. We used the Swarm master, but this command can run on any host in the Swarm cluster:

  1. Check that the overlay network is running using the following command:
    $ docker network ls
    
    NETWORK ID          NAME                DRIVER
    bd85c8791149        my-net              overlay
    fff23086faa8        mhs-demo0/bridge    bridge
    03dd288a8adb        mhs-demo0/none      null
    2a706780454f        mhs-demo0/host      host
    f6152664c40a        mhs-demo1/bridge    bridge
    ac546be9c37c        mhs-demo1/none      null
    c6a2de6ba6c9        mhs-demo1/host      host
    

    Since we are using the Swarm master environment, we are able to see all the networks on all the Swarm agents: the default networks on each engine and the single overlay network. In this case, there are two engines running on mhs-demo0 and mhs-demo1.

    Each NETWORK ID is unique.

  2. Switch to each Swarm agent in turn and list the networks:
    $ eval $(docker-machine env mhs-demo0)
    
    $ docker network ls
    NETWORK ID          NAME                DRIVER
    bd85c8791149        my-net              overlay
    03dd288a8adb        none                null
    2a706780454f        host                host
    fff23086faa8        bridge              bridge
    
    $ eval $(docker-machine env mhs-demo1)
    $ docker network ls
    
    NETWORK ID          NAME                DRIVER
    bd85c8791149        my-net              overlay
    358c45b96beb        docker_gwbridge     bridge
    f6152664c40a        bridge              bridge
    ac546be9c37c        none                null
    c6a2de6ba6c9        host                host
    

    Both agents report they have the my-net network with the overlay driver. We have a multi-host overlay network running.
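The claim that the network spans hosts can be verified end to end by starting a container on each VM attached to my-net and pinging one from the other. This is a sketch: the busybox image and the container names web0 and web1 are illustrative choices, not part of the original setup, and name resolution relies on the discovery that overlay networks provide:

```shell
# On the first host, start a container attached to the overlay network.
$ eval "$(docker-machine env mhs-demo0)"
$ docker run -itd --name=web0 --net=my-net busybox

# On the second host, start another container on the same network
# and ping the first one by name across the overlay.
$ eval "$(docker-machine env mhs-demo1)"
$ docker run -itd --name=web1 --net=my-net busybox
$ docker exec web1 ping -c 3 web0
```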

    The following figure shows how the two hosts will have containers created and tied together using the my-net overlay network:

    Creating an overlay network