Deployment automation using Fabric

For very small projects, it may be possible to deploy your code "by hand", that is, by manually typing through a remote shell the sequence of commands necessary to install and run a new version of your code. Even for an average-sized project, however, this is error prone and tedious, and should be considered a waste of the most precious resource you have: your own time.

The solution to this is automation. A simple rule of thumb: if you have needed to perform the same task manually at least twice, you should automate it so that you won't need to do it a third time. There are various tools that allow you to automate different things:

  • Remote execution tools such as Fabric are used for on-demand, automated execution of code on multiple remote hosts (see the short example after this list).
  • Configuration management tools such as Chef, Puppet, CFEngine, Salt, and Ansible are designed for the automated configuration of remote hosts (execution environments). They can be used to set up backing services (databases, caches, and so on), system permissions, users, and so on. Most of them can also be used as tools for remote execution, like Fabric, but depending on their architecture, this may be more or less convenient.
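
To get a taste of what such on-demand remote execution looks like, recent Fabric 1.x versions allow you to run a one-off shell command on a set of hosts straight from the command line, without writing any Python (the hostnames here are only placeholders):

$ fab -H host1.example.com,host2.example.com -- uptime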

Configuration management is a complex topic that deserves a separate book. The truth is that the simplest remote execution frameworks have the lowest entry barrier and are the most popular choice, at least for small projects. In fact, every configuration management tool that provides a way to declaratively specify the configuration of your machines has a remote execution layer implemented somewhere deep inside.

Also, some of these tools, due to their design, may not be well suited for actual automated code deployment. One such example is Puppet, which really discourages the explicit running of any shell commands. This is why many people choose to use both types of solutions to complement each other: configuration management for setting up the system-level environment and on-demand remote execution for application deployment.

Fabric (http://www.fabfile.org/) is so far the most popular solution used by Python developers to automate remote execution. It is a Python library and command-line tool for streamlining the use of SSH for application deployment or systems administration tasks. We will focus on it because it is relatively easy to start with. Be aware that, depending on your needs, it may not be the best solution to your problems. Anyway, it is a great example of a utility that can add some automation to your operations, if you don't have any yet.

Tip

Fabric and Python 3

This book encourages you to develop only in Python 3 (if possible), with notes about older syntax features and compatibility caveats only to make the eventual version switch a bit less painful. Unfortunately, at the time of writing this book, Fabric still has not been officially ported to Python 3. Enthusiasts of this tool have been told for at least a few years that there is ongoing Fabric 2 development that will bring a compatibility update. This is said to be a total rewrite with a lot of new features, but there is no official open repository for Fabric 2 and almost no one has seen its code. Core Fabric developers do not accept any pull requests for Python 3 compatibility in the current development branch of the project and close every feature request for it. Such an approach to the development of a popular open source project is at best disturbing. The history of this issue does not give us a high chance of seeing the official release of Fabric 2 soon. Such secret development of a new Fabric release raises many questions.

Regardless of anyone's opinion, this fact does not diminish the usefulness of Fabric in its current state. So, if you have already decided to stick with Python 3, you have two options: use a fully compatible and independent fork (https://github.com/mathiasertl/fabric/), or write your application in Python 3 and maintain your Fabric scripts in Python 2. The best approach would be to do the latter in a separate code repository.
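
If you decide on the fork, it can be installed directly from its Git repository, thanks to pip's support for VCS URLs (this assumes that the fork's default branch is installable as-is):

$ pip install git+https://github.com/mathiasertl/fabric.git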

You could, of course, automate all of this work using only Bash scripts, but this is very tedious and error-prone. Python has more convenient ways of string processing and encourages code modularization. Fabric is in fact only a tool for gluing together the execution of commands via SSH, so some knowledge about how the command-line interface and its utilities work in your environment is still required.

To start working with Fabric, you need to install the fabric package (using pip) and create a script named fabfile.py, usually located in the root of your project. Note that the fabfile can be considered a part of your project configuration. So, if you want to strictly follow the Twelve-Factor App methodology, you should not maintain its code in the source tree of the deployed application. Complex projects are in fact very often built from various components maintained as separate codebases, so this is another reason why it is a good approach to have one separate repository for all of the project component configurations and Fabric scripts. This makes the deployment of different services more consistent and encourages good code reuse.
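
Such a configuration repository could be laid out, for instance, like this (all names here are purely hypothetical, and circus.ini stands for the configuration of the process supervision tool used later in this chapter):

acme-operations/
├── fabfile.py        # deployment tasks shared by all components
├── webxample/
│   └── circus.ini    # process supervision config for this component
└── otherservice/
    └── circus.ini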

An example fabfile that defines a simple deployment procedure will look like this:

# -*- coding: utf-8 -*-
import os

from fabric.api import *  # noqa
from fabric.contrib.files import exists


# Let's assume we have a private package repository created
# using the 'devpi' project
PYPI_URL = 'http://devpi.webxample.example.com'

# This is an arbitrary location for storing installed releases.
# Each release is a separate virtual environment directory
# which is named after the project version. There is also a
# symbolic link 'current' that points to the most recently
# deployed version. This symlink is the actual path that will
# be used for configuring the process supervision tool, e.g.:
# .
# ├── 0.0.1
# ├── 0.0.2
# ├── 0.0.3
# ├── 0.1.0
# └── current -> 0.1.0/

REMOTE_PROJECT_LOCATION = "/var/projects/webxample"

env.project_location = REMOTE_PROJECT_LOCATION

# roledefs map environment types (staging/production) to host lists
env.roledefs = {
    'staging': [
        'staging.webxample.example.com',
    ],
    'production': [
        'prod1.webxample.example.com',
        'prod2.webxample.example.com',
    ],
}


def prepare_release():
    """ Prepare a new release by creating source distribution and uploading to out private package repository
    """
    local('python setup.py build sdist upload -r {}'.format(
        PYPI_URL
    ))


def get_version():
    """ Get current project version from setuptools """
    return local(
        'python setup.py --version', capture=True
    ).stdout.strip()


def switch_versions(version):
    """ Switch versions by replacing symlinks atomically """
    new_version_path = os.path.join(REMOTE_PROJECT_LOCATION, version)
    temporary = os.path.join(REMOTE_PROJECT_LOCATION, 'next')
    desired = os.path.join(REMOTE_PROJECT_LOCATION, 'current')

    # force symlink (-f) since there is probably one already
    run(
        "ln -fsT {target} {symlink}"
        "".format(target=new_version_path, symlink=temporary)
    )
    # mv -T ensures atomicity of this operation
    run("mv -Tf {source} {destination}"
        "".format(source=temporary, destination=desired))


@task
def uptime():
    """
    Run uptime command on remote host - for testing connection.
    """
    run("uptime")


@task
def deploy():
    """ Deploy application with packaging in mind """
    version = get_version()
    pip_path = os.path.join(
        REMOTE_PROJECT_LOCATION, version, 'bin', 'pip'
    )

    prepare_release()

    if not exists(REMOTE_PROJECT_LOCATION):
        # it may not exist for an initial deployment on a fresh host
        run("mkdir -p {}".format(REMOTE_PROJECT_LOCATION))

    with cd(REMOTE_PROJECT_LOCATION):
        # create a new virtual environment using venv
        run('python3 -m venv {}'.format(version))

        run("{} install webxample=={} --index-url {}".format(
            pip_path, version, PYPI_URL
        ))

    switch_versions(version)
    # let's assume that Circus is our process supervision tool
    # of choice.
    run('circusctl restart webxample')

Every function decorated with @task is treated as an available subcommand of the fab utility provided with the fabric package. You can list all available subcommands using the -l or --list switch:

$ fab --list
Available commands:

    deploy  Deploy application with packaging in mind
    uptime  Run uptime command on remote host - for testing connection.

Now you can deploy the application to the given environment type with just a single shell command:

$ fab -R production deploy
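
The -R switch tells fab which group of hosts from env.roledefs to operate on. Alternatively, a task can be bound to a role directly in the fabfile using Fabric's roles decorator. The following is only a sketch of that option (the deploy_staging task name is made up for this illustration):

from fabric.api import roles, task


@task
@roles('staging')
def deploy_staging():
    """ Deploy to staging hosts without the -R switch """
    deploy()

With such a task defined, running fab deploy_staging would perform the deployment once for every host listed under the 'staging' role.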

Note that the preceding fabfile serves illustrative purposes only. In your own code, you might want to provide extensive failure handling (a minimal sketch of that idea follows the list below) and also try to reload the application without the need to restart the web worker process. Also, some of the techniques presented here may not be obvious right now but will be explained later in this chapter. These are:

  • Deploying an application using the private package repository
  • Using Circus for process supervision on the remote host
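
To give an idea of what such failure handling could look like, here is a minimal sketch that could be appended to the preceding fabfile. It is only an assumption, not part of the original example: the restart_or_rollback task is hypothetical, and it assumes that circusctl exits with a non-zero status when the restart fails:

from fabric.api import run, settings, task


@task
def restart_or_rollback(previous_version):
    """ Restart the application, rolling back on failure """
    # with warn_only=True, a non-zero exit status does not abort
    # the whole fab run; run() returns a result we can inspect
    with settings(warn_only=True):
        result = run('circusctl restart webxample')

    if result.failed:
        # re-point the 'current' symlink back at a known-good
        # release using the switch_versions() function defined
        # earlier, then try restarting once more
        switch_versions(previous_version)
        run('circusctl restart webxample')

Such a task could be invoked with Fabric's argument syntax, for example, fab -R production restart_or_rollback:0.0.3.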