Now, it's time for us to launch a service and get some containers running on this node.
By clicking on the Services tab, we will be taken to the previous screen, where we can deploy a service.
Now, Tutum offers three areas to search for the images you might want to use: Jumpstarts, collections of images that Tutum has categorized for you; public repositories on Docker Hub; and private repositories on your Docker Hub account. For our example, we are going to select the tutum/hello-world image due to its small size.
After clicking the Select button for it, we are taken to a screen similar to the following one; yours will vary depending upon what image you have selected.
Now, you can give the service a name or use the one generated for you. You can also select which tag to use for the image, your deployment strategy (if you are using multiple nodes), how many containers to deploy, any tags you wish to add to the containers that will be deployed, custom port settings (if any), and whether the service should autorestart in the event of a failure. This should seem familiar, as some of these items, such as deployment strategy, were covered earlier in the book, mainly in Module 1 with regard to Docker Swarm. So, once you have everything kosher, go ahead and click on the Create and deploy button and prepare for blast off!
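As a point of comparison, the same options you just set in the UI can also be expressed in a Tutum stackfile (a YAML format similar to docker-compose). The sketch below is illustrative only; the key names (target_num_containers, deployment_strategy, autorestart) are taken from Tutum's stackfile format as best recalled, so verify them against the current documentation before relying on them:

```yaml
# Hypothetical stackfile sketch mirroring the UI options above.
hello-world:
  image: tutum/hello-world:latest     # image and tag to deploy
  target_num_containers: 2            # how many containers to run
  deployment_strategy: EMPTIEST_NODE  # placement across multiple nodes
  autorestart: ON_FAILURE             # restart the container if it fails
  ports:
    - "80:80"                         # custom port mapping
  tags:
    - web                             # deploy only to nodes tagged "web"
```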
After we click on the button, we are taken to a screen similar to the one we saw when we were deploying our host node.
We can see information on the left-hand side, such as what command the container is running, what ports are exposed, and other settings pertaining to the container. We can see that it is in the Starting state and should be running shortly.
Once it has finished starting and is in the Running state, we can manipulate the container: we can stop, terminate, or redeploy it, edit its configuration, or scale up the number of containers that are running.
Now, let's take a look at the navigation menu for containers.
Again, the Endpoints section will show us any port information pertaining to the running container.
The Logs section will show us a running log of the container's screen output. Since this container has just started, we don't have anything yet, but this section can be helpful if you need to troubleshoot a running container.
Next, we have the monitoring section that can show us the information we saw before in the Nodes section.
Items such as CPU, Memory, and Bandwidth Out can tell us how heavily our container is being used by the service it is running.
Next up is the Triggers section. This section can come in handy if you want to scale a service based on its CPU usage. For example, you could set a trigger so that if CPU usage goes above 60%, another container is launched to help with the load (assuming you are running your service behind a load balancer).
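The decision such a trigger encodes can be sketched as a small piece of logic. Tutum evaluates this server-side; the function below is purely illustrative, and the names, threshold, and cap are assumptions for the sake of the example:

```python
def desired_container_count(cpu_usage_pct: float, current_count: int,
                            threshold: float = 60.0, max_count: int = 10) -> int:
    """Illustrative scale-up rule: add one container when CPU crosses the threshold.

    Caps growth at max_count so a runaway trigger can't scale forever.
    """
    if cpu_usage_pct > threshold and current_count < max_count:
        return current_count + 1
    return current_count

# At 75% CPU with 2 containers running, the trigger fires and we scale to 3.
print(desired_container_count(75.0, 2))
```

A real setup would combine this with a scale-down rule and a cooldown period, so brief CPU spikes don't cause the container count to oscillate.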