The Docker Swarm topics

There are three advanced topics we will take a look at in this section:

  • Discovery services
  • Advanced scheduling
  • The Docker Swarm API

Discovery services

You can also use services such as etcd, ZooKeeper, Consul, and many others to automatically add nodes to your Swarm cluster, as well as to do other things such as listing the nodes or managing them. Let's take a look at Consul and how you can use it. The process is the same for every discovery service; you simply swap out the word consul for the discovery service you are using.
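
Note that the examples below assume you already have a Consul instance up and reachable at <consul_ip>. If you need one to experiment with, a quick way to start a single-node Consul server (a sketch using the progrium/consul image that was commonly paired with classic Swarm; the container name is illustrative) is:

$ docker run -d -p 8500:8500 --name consul progrium/consul -server -bootstrap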

On each node, the way you join the machine to the cluster changes slightly. Earlier, we did something like this:

$ docker-machine create \
-d virtualbox \
--swarm \
--swarm-discovery token://85b335f95e9a37b679e2ea9e6ad8d6361 \
swarm-node1

Now, we would do something similar to the following (based upon the discovery service you are using):

$ docker-machine create \
-d virtualbox \
--swarm \
--swarm-discovery consul://<consul_ip> \
swarm-node1

You can now start the Swarm manager (swarm manage) on your laptop or on whichever system you will be using as the Swarm manager. Before, we would run something like this:

$ docker run --rm swarm manage -H tcp://192.168.99.104:2376 token://85b335f95e9a37b679e2ea9e6ad8d6361

Now, pointing at the discovery service instead of the token, we run this:

$ docker run --rm swarm manage -H tcp://192.168.99.104:2376 consul://<consul_ip>
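
Alternatively, you can have docker-machine create and configure the manager machine for you by passing the --swarm-master flag (a sketch, reusing the same Consul placeholder; the machine name swarm-master is illustrative):

$ docker-machine create \
-d virtualbox \
--swarm \
--swarm-master \
--swarm-discovery consul://<consul_ip> \
swarm-master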

We can also use the discovery service to list the nodes in the cluster:

$ docker run --rm swarm list consul://<consul_ip>

You can easily switch out consul for another discovery service such as etcd or ZooKeeper; the format will still be the same:

$ docker-machine create \
-d virtualbox \
--swarm \
--swarm-discovery etcd://<etcd_ip> \
swarm-node1

$ docker-machine create \
-d virtualbox \
--swarm \
--swarm-discovery zk://<zookeeper_ip> \
swarm-node1

Advanced scheduling

What is advanced scheduling with regard to Docker Swarm? Docker Swarm ranks the nodes within your cluster to decide where to place each container, and it provides three different strategies for doing this. A strategy is selected with the --strategy switch of the swarm manage command (see the example after this list):

  • spread
  • binpack
  • random
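
For example, to run the manager with the binpack strategy instead of the default, the manage command from the previous section (a sketch, reusing the Consul placeholder) becomes:

$ docker run --rm swarm manage --strategy binpack -H tcp://192.168.99.104:2376 consul://<consul_ip>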

spread and binpack rank your nodes using the same criteria: the node's available RAM and CPU, as well as the number of containers it already has running.

spread will rank a host with fewer containers higher than a host with more containers (assuming that the available memory and CPU are the same). spread does what the name implies: it spreads your containers across multiple hosts. spread is the default scheduling strategy.

binpack will try to pack as many containers onto as few hosts as possible, keeping the number of Swarm hosts in use to a minimum.

random does just that: it randomly picks a host to place a container on.

The Swarm scheduler also comes with a few filters that can be used. These are assigned with the --filter switch of the swarm manage command and are used to narrow down the set of hosts a container can be scheduled on. There are five filters (an example follows the list):

  • constraint: There are three constraints that are commonly applied when scheduling containers:
    • storage=: This is used if you want a container to be placed on a host that, for example, has SSD drives in it
    • region=: This is used if you want to target a region; it is mostly used in cloud environments such as AWS or Microsoft Azure
    • environment=: This can restrict a container to hosts belonging to production, development, or other environments you have created
  • affinity: This filter is used to create attractions between containers. You can specify an affinity and have all of the matching containers run on the same host.
  • port: The port filter finds a host where the port a container needs is still available and schedules the container there. So, if you have a MySQL container that needs port 3306, it will find a host where port 3306 is not already in use and assign the container to that host.
  • dependency: The dependency filter schedules containers onto the same host based on three kinds of dependencies:
    • --volumes-from=dependency
    • --link=dependency:<alias>
    • --net=container:dependency
  • health: The health filter is pretty straightforward. It prevents containers from being scheduled on unhealthy hosts.
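
To see the constraint and affinity filters in action, a host first needs a matching label, and containers then request that label at run time. Here is a minimal sketch (the storage=ssd label and the db and web container names are illustrative, not from the text above; the docker commands target the Swarm manager from the earlier examples):

$ docker-machine create \
-d virtualbox \
--swarm \
--swarm-discovery consul://<consul_ip> \
--engine-label storage=ssd \
swarm-node2

$ docker -H tcp://192.168.99.104:2376 run -d -e constraint:storage==ssd --name db redis

$ docker -H tcp://192.168.99.104:2376 run -d -e affinity:container==db --name web nginx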

The Swarm API

Before we dive into the Swarm API, let's first make sure you understand what an API is. API stands for application programming interface. An API consists of the routines, protocols, and tools used to build applications. Think of an API as the bricks used to build a wall: it allows you to put the wall together from those bricks. APIs let you code in the environment you are comfortable in while reaching into other environments to do the work you need. So, if you are used to coding in Python, you can keep using Python for all your work while using the Swarm API to perform the work you want done in Swarm.

For example, if you wanted to create a container, you would use the following in your code:

POST /containers/create HTTP/1.1
Content-Type: application/json

{
       "Hostname": "",
       "Domainname": "",
       "User": "",
       "AttachStdin": false,
       "AttachStdout": true,
       "AttachStderr": true,
       "Tty": false,
       "OpenStdin": false,
       "StdinOnce": false,
       "Env": null,
       "Cmd": [
               "date"
       ],
       "Entrypoint": "",
       "Image": "ubuntu",
       "Labels": {
               "com.example.vendor": "Acme",
               "com.example.license": "GPL",
               "com.example.version": "1.0"
       },
       "Mounts": [
         {
           "Source": "/data",
           "Destination": "/data",
           "Mode": "ro,Z",
           "RW": false
         }
       ],
       "WorkingDir": "",
       "NetworkDisabled": false,
       "MacAddress": "12:34:56:78:9a:bc",
       "ExposedPorts": {
               "22/tcp": {}
       },
       "HostConfig": {
         "Binds": ["/tmp:/tmp"],
         "Links": ["redis3:redis"],
         "LxcConf": {"lxc.utsname":"docker"},
         "Memory": 0,
         "MemorySwap": 0,
         "CpuShares": 512,
         "CpuPeriod": 100000,
         "CpusetCpus": "0,1",
         "CpusetMems": "0,1",
         "BlkioWeight": 300,
         "MemorySwappiness": 60,
         "OomKillDisable": false,
         "PortBindings": { "22/tcp": [{ "HostPort": "11022" }] },
         "PublishAllPorts": false,
         "Privileged": false,
         "ReadonlyRootfs": false,
         "Dns": ["8.8.8.8"],
         "DnsSearch": [""],
         "ExtraHosts": null,
         "VolumesFrom": ["parent", "other:ro"],
         "CapAdd": ["NET_ADMIN"],
         "CapDrop": ["MKNOD"],
         "RestartPolicy": { "Name": "", "MaximumRetryCount": 0 },
         "NetworkMode": "bridge",
         "Devices": [],
         "Ulimits": [{}],
         "LogConfig": { "Type": "json-file", "Config": {} },
         "SecurityOpt": [""],
         "CgroupParent": ""
      }
  }

You would use the preceding example to create a container, but there are also other things you can do, such as inspecting containers, getting the logs from a container, attaching to a container, and much more. Simply put, if you can do it through the command line, there is more than likely something in the API you can tie into to do it from the programming language you are using.
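
As a quick way to try this from the command line, you can send a trimmed-down version of that request to the Swarm manager with curl (a sketch that assumes the manager from earlier is reachable without TLS at 192.168.99.104:2376; with TLS enabled you would also pass --cert, --key, and --cacert):

$ curl -X POST -H "Content-Type: application/json" \
-d '{"Image": "ubuntu", "Cmd": ["date"]}' \
http://192.168.99.104:2376/containers/create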

The Docker documentation states that the Swarm API is mostly compatible with the Docker Remote API. We could list the endpoints out in this section, but seeing that the list could change as things are added to or removed from the Docker Swarm API, I believe it's best to refer to the Swarm API documentation instead, so the information here does not become outdated:

https://docs.docker.com/swarm/swarm-api/
