Execution - EC2 and Lambda

The core of AWS is EC2 (https://aws.amazon.com/ec2/), which lets you create Virtual Machines. Amazon uses the Xen hypervisor (https://www.xenproject.org/) to run Virtual Machines, and Amazon Machine Images (AMIs) as the templates from which they are created.

AWS has a huge list of AMIs you can choose from; you can also create your own AMIs by tweaking an existing AMI. Working with AMIs is quite similar to working with Docker images. Once you have picked an AMI from the Amazon console, you can launch an instance, and, after it has booted, you can SSH into it and start working.

At any moment, you can snapshot the VM and create an AMI that saves the instance state. This feature is quite useful if you want to manually set up a server, then use it as a basis for deploying clusters.

EC2 instances come in different series (https://aws.amazon.com/ec2/instance-types/). The T2, M3, and M4 series are for general-purpose workloads. The T series uses a bursting technology, which lets the instance temporarily run above its baseline performance when there's a workload peak.

The C3 and C4 series are for CPU-intensive applications (up to 32 Xeon CPUs), and the X1 and R4 ones have a lot of RAM (up to 1,952 GiB).

Of course, the more RAM or CPU, the more expensive the instance is. For Python microservices, assuming you are not hosting any database on the application instance, a t2.xxx or an m3.xx can be a good choice. You need to avoid the t2.nano and t2.micro though, which are fine for running some tests, but too limited for running anything in production. The size you need to choose depends on the resources taken by the operating system and your application.

However, since we are deploying our microservices as Docker images, we do not need to run a fancy Linux distribution. The only thing that matters is choosing an AMI that's tuned to run Docker containers.

In AWS, the built-in way to perform Docker deployments is to use the EC2 Container Service (ECS) (https://aws.amazon.com/ecs). ECS offers features that are similar to Kubernetes, and integrates well with other AWS services. ECS uses its own Linux AMI to run Docker containers, but you can configure the service to run another AMI. For instance, CoreOS (https://coreos.com/) is a Linux distribution whose sole purpose is to run Docker containers. If you use CoreOS, that is one part of your stack that is not locked into AWS.
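To give an idea of what ECS expects, here is a minimal sketch of a task definition, expressed as the Python dictionary you would pass to a registration call such as boto3's ECS client. All names here (the family, image, and ports) are hypothetical values for illustration, not something prescribed by the book:

```python
import json

# Hypothetical ECS task definition describing one Docker container to run.
# The "family" groups revisions of the definition; "containerDefinitions"
# lists the containers, their image, and the resources they may use.
task_definition = {
    "family": "my-microservice",
    "containerDefinitions": [
        {
            "name": "web",
            "image": "myrepo/my-microservice:latest",
            "memory": 256,  # hard memory limit in MiB
            "cpu": 128,     # CPU units (1024 = one vCPU)
            # hostPort 0 asks ECS to pick a free host port dynamically
            "portMappings": [{"containerPort": 5000, "hostPort": 0}],
        }
    ],
}

# The same structure serializes to the JSON you would paste in the console
print(json.dumps(task_definition, indent=2))
```

With credentials configured, a definition like this would be registered with `boto3.client("ecs").register_task_definition(**task_definition)` and then launched as a service or a one-off task.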

Lastly, Lambda (https://aws.amazon.com/lambda/) is a service you can use to trigger the execution of a Lambda Function. A Lambda Function is a piece of code that you can write in Node.js, Java, C#, or Python 2.7 or 3.6, and that is deployed as a deployment package, which is a ZIP file containing your script and all its dependencies. If you use Python, the ZIP file is usually a Virtualenv with all the dependencies needed to run the function.
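A Python Lambda function boils down to a module exposing a handler that takes an event and a context. The sketch below shows the shape of such a handler; the function and event fields are hypothetical examples, and the handler name is whatever you configure when creating the function:

```python
import json

def handler(event, context):
    """Entry point invoked by the Lambda runtime.

    `event` is the (JSON-deserialized) payload that triggered the call;
    `context` carries runtime metadata such as the remaining time.
    """
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": "Hello, %s!" % name}),
    }

# Locally, you can exercise the handler with a sample event:
result = handler({"name": "microservice"}, None)
print(result["body"])  # {"message": "Hello, microservice!"}
```

Being able to call the handler directly like this makes Lambda functions easy to unit test before zipping them into a deployment package.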

Lambda functions can replace Celery workers, since they can be triggered asynchronously by AWS events. The benefit of running a Lambda function is that you do not have to deploy a Celery microservice that needs to run 24/7 to pull messages from a queue. Depending on the message frequency, using Lambda can reduce costs. However, again, using Lambda means you are locked into AWS services.
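For example, work that a Celery worker would pick up from a queue after a file upload can instead be a Lambda function wired to S3 object-created notifications. The sketch below follows the documented S3 event record layout; the bucket name, key, and the "work" itself are made up for illustration:

```python
def process_upload(event, context):
    """Handle an S3 "object created" notification.

    S3 delivers a list of records; each one identifies the bucket and
    the object key that triggered the event.
    """
    processed = []
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # ...do the actual work here (resize an image, index a document)...
        processed.append("%s/%s" % (bucket, key))
    return processed

# A trimmed-down sample event, mimicking what S3 would send:
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "uploads"}, "object": {"key": "report.pdf"}}}
    ]
}
print(process_upload(sample_event, None))  # ['uploads/report.pdf']
```

The function only runs (and only costs money) when an upload actually happens, which is the cost advantage over a permanently running worker.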

Let's now look at the storage solutions.
