14

Automating Deployment on AWS

In the previous chapter, we successfully deployed the Django application on an EC2 instance. However, most of the deployment was done manually, and we didn't check for regressions when pushing a new version of the application. Fortunately, the whole deployment can be automated using GitHub Actions.

In this chapter, we will use GitHub Actions to deploy automatically on an AWS EC2 instance so that you don't have to do it manually. We will explore how to write a configuration file that runs tests on the code to avoid regressions, and finally connect via Secure Shell (SSH) to a server and execute a script that pulls the latest version of the code, builds it, and brings the containers up. To recapitulate, we will cover the following topics:

  • Explaining continuous integration and continuous deployment (CI/CD)
  • Defining the CI/CD workflow
  • What is GitHub Actions?
  • Configuring the backend for automated deployment

Technical requirements

The code for this chapter can be found at https://github.com/PacktPublishing/Full-stack-Django-and-React/tree/chap14. If you are using a Windows machine, ensure that you have the OpenSSH client installed on your machine as we will generate SSH key pairs.

Explaining CI/CD

Before going deeper into GitHub Actions, we must understand the terms CI and CD. In this section, we will understand each term and explain the differences.

CI

CI is the practice of automating the integration of code changes from multiple collaborators into a single project. It also concerns the ability to reliably release changes made to an application at any time. Without CI, we would have to manually coordinate the deployment, the integration of changes into the application, and security and regression checks.

Here’s a typical CI workflow:

  1. A developer creates a new branch from the main branch, makes changes, commits, and then pushes them to that branch.
  2. When the push is done, the code is built, and then automated tests are run.
  3. If the automated tests fail, the developer team is notified, and the next steps (usually deployment) are canceled. If the tests succeed, then the code is ready to be deployed in a staging or production environment.

You can find many tools for CI pipeline configuration, such as GitHub Actions, Semaphore, Travis CI, and more. In this book, we will use GitHub Actions to build the CI pipeline; if the CI pipeline passes, we can deploy on AWS. Let's now learn more about CD.

CD

CD is related to CI but usually represents the next step after a successful CI pipeline run. The quality of the CI pipeline (builds and tests) determines the quality of the releases. With CD, the software is automatically deployed to a staging or production environment once it passes the CI step.

An example of a CD pipeline could look like this:

  1. A developer creates a branch, makes and pushes changes, and then opens a merge request.
  2. Tests and builds are done to make sure there is no regression.
  3. The code is reviewed by another developer, and once the review is approved, the merge request is merged and another suite of tests and builds is run.
  4. After that, the changes are deployed to a staging or production environment.

GitHub Actions and the other tools mentioned for CI also support CD. With a better understanding of CI and CD, let’s define the workflow that we will configure for the backend.

Important note

You will also hear about continuous delivery if you dive deeper into CI/CD; it is a further extension of continuous deployment. Continuous deployment focuses on deploying to the servers, while continuous delivery focuses on the release and the release strategy.

Defining the CI/CD workflow

Before deploying an application as we did in the previous chapter, we need to write down the steps we will follow, along with the tools needed for the deployment. In this chapter, we will automate the deployment of the backend on AWS. Basically, each time a push is made on the main branch of the repository, the code should be updated on the server and the containers should be rebuilt and restarted.

Again, let’s define the flow, as follows:

  1. A push is made on the main branch of the repository.
  2. Docker containers are built and started to run tests. If the tests fail, the following steps are ignored.
  3. We connect via SSH to the server and run a script to pull the new changes from the remote repository, build the containers, and restart the services using docker-compose.

The following diagram illustrates a typical CI/CD workflow:

Figure 14.1 – CI/CD workflow

That is a lot of things to do manually, and thankfully, GitHub provides an interesting feature called GitHub Actions. Now that we have a better idea about the deployment strategy, let’s explore this feature more.

What is GitHub Actions?

GitHub Actions is a service built and developed by GitHub for automating builds, testing, and deployment pipelines. Using GitHub Actions, we can easily implement the CI/CD workflow shown in Figure 14.1. Before continuing, make sure that your project is hosted on GitHub.

GitHub Actions configurations are made in a file that must be stored in a dedicated repository directory called .github/workflows. For better security, we will also use GitHub Secrets to store deployment information such as the IP address of the server, the SSH passphrase, and the server username. Let's start by understanding how to write a GitHub Actions workflow file.

How to write a GitHub Actions workflow file

Workflow files are stored in a dedicated directory called .github/workflows. The syntax used for these files is YAML syntax, hence workflow files have the .yml extension.

Let’s dive deeper into the syntax of a workflow file:

  • name: This represents the name of the workflow. This name is set by placing the following line at the beginning of the file:
    name: Name of the Workflow
  • on: This specifies the events that will trigger the workflow automatically. An example of an event is a push, a pull request, or a fork:
    on: push
  • jobs: This specifies the actions that the workflow will perform. You can have multiple jobs and even have some jobs depending on each other:
    jobs:
      build-test:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v2
          - name: Listing files in a directory
            run: ls -a

In our GitHub Actions workflow, we will have two jobs:

  • A job named build-test to build the Docker containers and run the tests inside those containers
  • A job named deploy to deploy the application to the AWS server

The deployment of the application will depend on the failure or success of the build-test job. It's a good way to prevent failing code from reaching the production environment. Now that we understand the GitHub Actions workflow, the YAML syntax, and the jobs we want to write, let's write the GitHub Actions file and configure the server for automatic deployment.
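The dependency between the two jobs is expressed with the `needs` keyword. Here is a minimal sketch of the shape our workflow will take (the step bodies are placeholders, not the real commands):

```yaml
jobs:
  build-test:
    runs-on: ubuntu-latest
    steps:
      - run: echo "build the containers and run the tests here"
  deploy:
    needs: [build-test]  # deploy starts only if build-test finishes successfully
    runs-on: ubuntu-latest
    steps:
      - run: echo "connect via SSH and deploy here"
```

If build-test fails, GitHub Actions skips deploy entirely, which is exactly the gate we want in front of production.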

Configuring the backend for automated deployment

In the previous sections, we discussed the syntax of a GitHub Actions file and the jobs we must write to add CI and CD to the Django application. Let's write the GitHub Actions file and configure the backend for automatic deployment.

Adding the GitHub Actions file

At the root of the project, create a directory called .github, and inside this directory create another directory called workflows. Inside the workflows directory, create a file called ci-cd.yml. This file will contain the YAML configuration for the GitHub action. Let’s start by defining the name and the events that will trigger the running of the workflow:

.github/workflows/ci-cd.yml

name: Build, Test and Deploy Postagram
on:
  push:
    branches: [ main ]

The workflow will run every time there is a push on the main branch. Let's go on and write the build-test job (like all jobs, it is declared under the top-level jobs: key). For this job, we will follow three steps:

  1. Injecting environment variables into a file: Docker will need a .env file to build the images and start the containers, so we'll write dummy values into a .env file on the Ubuntu runner.
  2. After that, we will build the containers.
  3. Finally, we will run the tests in the api container.

Let’s get started with the steps:

  1. Let’s start by writing the job and injecting the environment variables:

.github/workflows/ci-cd.yml

build-test:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v2
    - name: Injecting env vars
      run: |
        cat > .env << EOF
        SECRET_KEY=test_foo
        DATABASE_NAME=test_coredb
        DATABASE_USER=test_core
        DATABASE_PASSWORD=12345678
        DATABASE_HOST=test_postagram_db
        DATABASE_PORT=5432
        POSTGRES_USER=test_core
        POSTGRES_PASSWORD=12345678
        POSTGRES_DB=test_coredb
        ENV=TESTING
        DJANGO_ALLOWED_HOSTS=127.0.0.1,localhost
        EOF
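A subtle detail when generating a .env file from a workflow step is leading whitespace: if each line of the injected string is indented, some parsers will keep that indentation as part of the key. The following local sketch (scratch path, a subset of the dummy values) writes the file with a heredoc and verifies that every line starts at column zero:

```shell
# Sketch: write dummy variables to a scratch .env the way the CI job does,
# then check that no line carries leading whitespace.
tmp=$(mktemp -d)
cat > "$tmp/.env" << EOF
SECRET_KEY=test_foo
DATABASE_NAME=test_coredb
ENV=TESTING
EOF
if grep -q '^[[:space:]]' "$tmp/.env"; then
  result="whitespace found"
else
  result="clean"
fi
echo "$result"
```

Running the same check against the runner's workspace is an easy way to debug a job that fails because a variable such as SECRET_KEY comes through empty.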

The tests will probably fail because we haven't defined the GitHub secret called TEST_SECRETS:

Figure 14.2 – Testing GitHub secrets

  2. Next, let's add the command to build the containers:

.github/workflows/ci-cd.yml

- name: Building containers
  run: |
    docker-compose up -d --build
  3. And finally, let's run the pytest command in the api container (the -T flag disables pseudo-terminal allocation, which is required in a non-interactive CI environment):

.github/workflows/ci-cd.yml

- name: Running Tests
  run: |
    docker-compose exec -T api pytest

Great! We have the first job of the workflow fully written.

  4. Let's push the code by running the following command and see how it runs on the GitHub side:
    git push
  5. Go to GitHub and check your repository. You will see an orange badge in the repository details, meaning that the workflow is running:
Figure 14.3 – Running GitHub Actions

  6. Click on the orange badge to get more details about the running workflows. The workflow should pass, and you will have a green status:
Figure 14.4 – Successful GitHub Action job

Great! We have the build-test job running successfully, which means that our code can be deployed in a production environment. Before writing the deploy job, let’s configure the server first for automatic deployment.

Configuring the EC2 instance

It’s time to go back to the EC2 instance and make some configurations to ease the automatic deployment. Here’s the list of tasks to do so that GitHub Actions can automatically handle the deployment for us:

  • Generate a pair of SSH keys (private and public keys) with a passphrase.
  • Add the public key to authorized_keys on the server.
  • Add the private key to GitHub Secrets to reuse it for the SSH connection.
  • Register the username used on the OS of the EC2 instance, the IP address, and the SSH passphrase to GitHub Secrets.
  • Add a deploying script on the server. Basically, the script will pull code from GitHub, check for changes, and eventually build and rerun the containers.
  • Wrap everything and add the deploy job.

This looks like a lot of steps, but here’s the good thing: you just need to do that once. Let’s start by generating SSH credentials.

Generating SSH credentials

The best practice for generating SSH keys is to generate the keys on the local machine and not the remote machine. In the next lines, we will use terminal commands. If you are working on a Windows machine, make sure you have the OpenSSH client installed. The following commands are executed on a Linux machine. Let’s get started with the steps:

  1. Open the terminal and enter the following command to generate an RSA key pair named postagramapi (the name we will reference in the next step):
    ssh-keygen -t rsa -b 4096 -C "[email protected]" -f ~/.ssh/postagramapi
Figure 14.5 – Generating SSH keys

  2. Next, copy the content of the public key and add it to the .ssh/authorized_keys file of the remote EC2 instance. You can do a copy and paste using the mouse, or you can run the following command:
    cat .ssh/postagramapi.pub | ssh username@hostname_or_ipaddress 'cat >> .ssh/authorized_keys'
  3. Then, copy the content of the private key and add it to GitHub Secrets:
Figure 14.6 – Registering the private key into GitHub Secrets

You also need to do the same for the passphrase, EC2 server IP address, and username for the OS of the EC2 machine:

Figure 14.7 – Repository secrets

Great! We have the secrets configured on the repository; we can now write the deploy job on the GitHub action.
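If you want to rehearse the key plumbing before touching the real instance, the following sketch runs against a scratch directory (the file name postagramapi matches the chapter; the empty passphrase -N "" is only for this rehearsal, and you should keep a real passphrase on the actual deployment key):

```shell
# Scratch directory standing in for ~/.ssh on your machine and on the server.
tmp=$(mktemp -d)
# Generate a throwaway RSA key pair non-interactively.
ssh-keygen -t rsa -b 2048 -C "postagram-deploy" -f "$tmp/postagramapi" -N "" -q
# Simulate appending the public key to the server's authorized_keys file.
mkdir -p "$tmp/server_ssh"
cat "$tmp/postagramapi.pub" >> "$tmp/server_ssh/authorized_keys"
# sshd silently rejects keys when these files are group- or world-writable,
# so apply the permissions it expects.
chmod 700 "$tmp/server_ssh"
chmod 600 "$tmp/server_ssh/authorized_keys"
echo "key installed"
```

A surprising number of "Permission denied (publickey)" failures during automated deployment come down to these file permissions on the server rather than the key material itself.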

Adding a deploying script

The benefit of using GitHub Actions is that you can already find preconfigured GitHub Actions on GitHub Marketplace and just use them instead of reinventing the wheel. For the deployment, we will use the ssh-action GitHub action, which is developed to allow developers to execute remote commands via SSH. This perfectly fits our needs.

Let’s write the deploy job inside our GitHub action workflow and write a deployment script on the EC2 instance:

  1. Inside the .github/workflows/ci-cd.yml file, add the following code at the end of the file:

.github/workflows/ci-cd.yml

deploy:
  name: Deploying on EC2 via SSH
  if: ${{ github.event_name == 'push' }}
  needs: [build-test]
  runs-on: ubuntu-latest
  steps:
  - name: Deploying Application on EC2
    uses: appleboy/ssh-action@master
    with:
      host: ${{ secrets.SSH_EC2_IP }}
      username: ${{ secrets.SSH_EC2_USER }}
      key: ${{ secrets.SSH_PRIVATE_KEY }}
      passphrase: ${{ secrets.SSH_PASSPHRASE }}
      script: |
        cd ~/.scripts
        ./docker-ec2-deploy.sh

The script run on the EC2 instance simply executes a file called docker-ec2-deploy.sh. This file will contain Bash code that pulls code from the GitHub repository and rebuilds the containers.

Let’s connect to the EC2 instance and add the docker-ec2-deploy.sh code.

  2. In a ~/.scripts directory (the directory the deploy job changes into; create it if it doesn't exist), create a file called docker-ec2-deploy.sh. The deployment process using Git and Docker will follow these steps:
    1. We must ensure that there are effective changes in the GitHub repository before building and running the containers; it would be a waste of resources to rebuild the containers if git pull hasn't brought new changes. First, the script checks that we are on the target branch:
    #!/usr/bin/env bash
    TARGET='main'
    cd ~/api || exit
    ACTION_COLOR='\033[1;90m'
    NO_COLOR='\033[0m'
    echo -e "${ACTION_COLOR}Checking if we are on the target branch${NO_COLOR}"
    BRANCH=$(git rev-parse --abbrev-ref HEAD)
    if [ "$BRANCH" != "${TARGET}" ]
    then
       exit 0
    fi
    2. Next, we run the git fetch command to download content from the GitHub repository and compare the local HEAD with the upstream:
    # Checking if the repository is up to date.
    git fetch
    HEAD_HASH=$(git rev-parse HEAD)
    UPSTREAM_HASH=$(git rev-parse ${TARGET}@{upstream})
    if [ "$HEAD_HASH" == "$UPSTREAM_HASH" ]
    then
       echo -e "${ACTION_COLOR}The current branch is up to date with origin/${TARGET}.${NO_COLOR}"
       exit 0
    fi

We check whether the repository is up to date by comparing the HEAD hash with the upstream hash; if they are the same, the repository is up to date and the script exits.

  3. If the HEAD and upstream hashes differ, we pull the latest changes, then build and run the containers:
# If there are new changes, we pull these changes.
git pull origin main;
# We can now build and start the containers
docker compose up -d --build
exit 0;

Great! We can now give execution permission to the script:

chmod +x docker-ec2-deploy.sh
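Before relying on the workflow to run it, the script's git logic can be rehearsed locally with a throwaway repository pair. This is a sketch of the branch guard and the HEAD-versus-upstream comparison only (temporary paths, not the full script):

```shell
# A local "origin" repository stands in for the GitHub remote.
tmp=$(mktemp -d)
git init -q "$tmp/origin"
git -C "$tmp/origin" -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m "first"
git clone -q "$tmp/origin" "$tmp/clone"
cd "$tmp/clone"
# Branch guard: the deploy script only proceeds on the target branch.
TARGET=$(git -C "$tmp/origin" rev-parse --abbrev-ref HEAD)
BRANCH=$(git rev-parse --abbrev-ref HEAD)
# Up-to-date check: compare the local HEAD with the fetched upstream.
git fetch -q
HEAD_HASH=$(git rev-parse HEAD)
UPSTREAM_HASH=$(git rev-parse @{upstream})
UP_TO_DATE_BEFORE=$([ "$HEAD_HASH" = "$UPSTREAM_HASH" ] && echo yes || echo no)
# Simulate a new push to origin, fetch again, and re-compare.
git -C "$tmp/origin" -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m "second"
git fetch -q
UPSTREAM_HASH=$(git rev-parse @{upstream})
UP_TO_DATE_AFTER=$([ "$HEAD_HASH" = "$UPSTREAM_HASH" ] && echo yes || echo no)
echo "before: $UP_TO_DATE_BEFORE, after: $UP_TO_DATE_AFTER"
```

The first comparison reports the clone as up to date; after the simulated push, the hashes diverge, which is exactly the condition under which the real script pulls and rebuilds.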

And we are done. You can push the changes made on the GitHub workflow and the automatic deployment job will start.

Important note

Depending on the type of repository (private or public), you might need to enter your GitHub credentials for every remote Git command executed, such as git push or git pull. Ensure you have your credentials configured using SSH or HTTPS. You can check how to do this at https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token.

Ensure you have a .env file at the root of the project on the AWS server. Here is an example of a .env file you can use for deployment. Don't forget to change the values of the database credentials and secret keys:

SECRET_KEY=foo
DATABASE_NAME=coredb
DATABASE_USER=core
DATABASE_PASSWORD=wCh29&HE&T83
DATABASE_HOST=localhost
DATABASE_PORT=5432
POSTGRES_USER=core
POSTGRES_PASSWORD=wCh29&HE&T83
POSTGRES_DB=coredb
ENV=PROD
DJANGO_ALLOWED_HOSTS=EC2_IP_ADDRESS,EC2_INSTANCE_URL
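A missing variable in this file typically only surfaces when a container crashes at startup, so it can be worth checking the keys up front. Here is a small sketch (scratch path, abbreviated key list) that verifies every expected key is present:

```shell
# Write an abbreviated example .env (dummy values) to a scratch path.
tmp=$(mktemp -d)
cat > "$tmp/.env" << EOF
SECRET_KEY=foo
DATABASE_NAME=coredb
DATABASE_USER=core
DATABASE_HOST=localhost
ENV=PROD
EOF
# Check that each key the application expects appears in the file.
missing=0
for key in SECRET_KEY DATABASE_NAME DATABASE_USER DATABASE_HOST ENV; do
  grep -q "^${key}=" "$tmp/.env" || { echo "missing: $key"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "all keys present"
```

Pointing the same loop at the real file on the server (with the full key list from the example above) catches typos before docker-compose ever runs.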

Ensure you replace EC2_IP_ADDRESS and EC2_INSTANCE_URL with the values of your EC2 instance. You will also need to allow TCP connections on port 80 of the EC2 instance so that HTTP requests can reach it, for the whole configuration to work.

Figure 14.8 – Allowing HTTP requests

You can also remove the port 8000 rule, as NGINX handles redirecting HTTP requests to 0.0.0.0:8000 automatically.

With the concepts of CI/CD understood and the GitHub Actions workflow explained and written, you now have all the tools you need to automate deployment on EC2 instances or any server. Now that the backend is deployed, we can move on to deploying the React frontend, not on an EC2 instance but on AWS Simple Storage Service (S3).

Summary

In this chapter, we finally automated the deployment of the Django application on AWS using GitHub Actions. We explored the concepts of CI and CD and how GitHub Actions allows us to configure such pipelines.

We wrote a GitHub Actions file with jobs to build the containers and run the test suites; if these steps succeed, the deploy job connects to the EC2 instance and runs a script to pull changes, build new images, and restart the containers.

In the next chapter, we will learn how to deploy the React application using a service such as AWS S3.

Questions

  1. What is the difference between CI and CD?
  2. What are GitHub Actions?
  3. What is continuous delivery?