Launching a stack

This is where it may get confusing. If a service is the equivalent of a running container, then a stack is a collection of running services, much like launching multiple containers using Docker Compose. In fact, you can launch a stack using a Docker Compose file, with a few additions.

Let's look at launching our Cluster application again. You can find the Docker Compose file we are going to be using in the repo, in the /bootcamp/chapter04/cluster/ folder. Before we go through the contents of the docker-compose.yml file, let's launch the stack. To do this, run the following command:

docker stack deploy --compose-file=docker-compose.yml cluster
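
Docker derives the names from the stack name and the service name in the Compose file, so the output should look something like this (it may vary slightly between Docker versions):

Creating network cluster_default
Creating service cluster_cluster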

As you can see, you get confirmation that the network for the stack has been created, along with the service. You can list the stacks deployed on the cluster, along with a count of the services each contains, by running:

docker stack ls
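
This shows the stack itself rather than its contents. To list the services our stack has launched, along with their replica counts and published ports, you can also run:

docker stack services cluster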

And then check on the tasks within the service by running:

docker stack ps cluster
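
Your output should show six tasks spread across the worker nodes. If you only want to see which node each task has landed on, docker stack ps also supports a --format flag; a quick example using the standard Go template fields:

docker stack ps --format "{{.Name}}: {{.Node}} ({{.CurrentState}})" cluster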

You may be surprised to see that the service has launched its tasks on swarm02 and swarm03 only. For an explanation as to why, let's open the docker-compose.yml file:

version: "3"
services:
  cluster:
    image: russmckendrick/cluster
    ports:
      - "80:80"
    deploy:
      replicas: 6
      restart_policy:
        condition: on-failure
      placement:
        constraints:
          - node.role == worker

As you can see, the docker-compose.yml file looks much like the ones we covered in Chapter 2, Launching Applications Using Docker, until we get to the deploy section.

You may have already spotted the reason why we only have tasks running on our two worker nodes: as you can see in the placement section, we have told Docker to only launch our tasks on nodes with the role of worker.
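
If you want to double-check which of your nodes hold the worker role, you can list them from one of the managers; nodes with nothing in the MANAGER STATUS column are workers:

docker node ls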

Next up, we have defined a restart_policy. This tells Docker what to do should any of the tasks stop responding; in our case, we are telling Docker to restart them on-failure. Finally, we are telling Docker to launch six replicas within our service.
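
Incidentally, you do not have to edit the file to change the replica count. Stacks name their services as stackname_servicename, so ours should be called cluster_cluster, and you could scale it on the fly with something like:

docker service scale cluster_cluster=3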

Let's test that restart policy by terminating one of our two worker nodes. There is a graceful way of doing this by draining the node (more on that in a moment); however, it is more fun to just terminate the node. To do this, run the following command:

docker-machine rm swarm03
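
For reference, the graceful route would have been to drain the node first; this tells Swarm to reschedule the node's tasks elsewhere before you take it out of service:

docker node update --availability drain swarm03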

Running docker stack ps cluster immediately after removing the host shows that Docker hasn't caught up yet.

Running docker stack ps cluster a few seconds later will show that we still have six tasks running, but as you can see from the terminal output, they are now all running on swarm02, and the tasks that the new ones have replaced are showing a state of Shutdown.
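
If you would rather hide the shutdown tasks from the output, docker stack ps accepts a filter; for example:

docker stack ps --filter "desired-state=running" cluster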

Our application should still be available by entering the IP address of swarm01 or swarm02 into your browser. Once you have finished with the remaining two hosts, you can remove them by running:

docker-machine rm swarm01 swarm02
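
Alternatively, if you only wanted to remove the application and keep the hosts running, you could have removed the stack itself by running:

docker stack rm cluster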

So far, we have manually created our Docker Swarm cluster in DigitalOcean. I am sure you will agree that the process has been straightforward, especially considering how powerful the clustering technology is; you are probably already starting to think about how you can deploy your own services and stacks.

In the next few sections, we are going to look at Docker for Amazon Web Services and Docker for Azure, and how Docker can take advantage of the range of supporting features provided by these two public cloud services.
