Chapter 6. Securing your system: IAM, security groups, and VPC

This chapter covers

  • Keeping your software up to date
  • Controlling access to your AWS account with users and roles
  • Keeping your traffic under control with security groups
  • Using CloudFormation to create a private network
  • Who is responsible for security?

If security is a wall, you’ll need a lot of bricks to build that wall. This chapter focuses on the four most important bricks to secure your systems on AWS:

  • Installing software updates —New security vulnerabilities are found in software every day. Software vendors release updates to fix those vulnerabilities. It’s your job to install those updates as quickly as possible after they’re released. Otherwise, your system will be an easy victim for hackers.
  • Restricting access to your AWS account —This becomes even more important if you aren’t the only one accessing your AWS account (if coworkers and scripts are also accessing it). A script with a bug can easily terminate all your EC2 instances instead of the one you intended. Granting least permissions is key to securing your AWS resources from accidental or intended disastrous actions.
  • Controlling network traffic to and from your EC2 instances —You only want ports to be accessible if they must be. If you run a web server, the only ports you need to open to the outside world are port 80 for HTTP traffic and 443 for HTTPS traffic. Close down all the other ports!
  • Creating a private network in AWS —You can create subnets that aren’t reachable from the internet. And if they’re not reachable, nobody can access them. Nobody? You’ll learn how you can get access to them while preventing others from doing so.

Examples are 100% covered by the Free Tier

The examples in this chapter are completely covered by the Free Tier. As long as you don’t run the examples for longer than a few days, you won’t pay anything. Keep in mind that this only applies if you created a fresh AWS account for this book and nothing else is going on in your AWS account. Try to complete the examples of the chapter within a few days; you’ll clean up your account at the end of each example.

One important brick is missing: securing your self-developed applications. You need to validate user input and allow only the necessary characters, avoid saving passwords in plain text, use SSL/TLS to encrypt traffic between your servers and your users, and so on.

Chapter requirements

To fully understand this chapter, you should be familiar with the following concepts:

  • Subnet
  • Route table
  • Access control list (ACL)
  • Gateway
  • Firewall
  • Port
  • Access management
  • Basics of the Internet Protocol (IP), including IP addresses

6.1. Who’s responsible for security?

AWS is a shared-responsibility environment, meaning responsibility is shared between AWS and you. AWS is responsible for the following:

  • Protecting the network through automated monitoring systems and robust internet access to prevent Distributed Denial of Service (DDoS) attacks
  • Performing background checks on employees who have access to sensitive areas
  • Decommissioning storage devices by physically destroying them after end of life
  • Ensuring physical and environmental security of data centers, including fire protection and security staff

The security standards are reviewed by third parties; you can find an up-to-date overview at http://aws.amazon.com/compliance/.

What are your responsibilities?

  • Encrypting network traffic to prevent attackers from reading or manipulating data (for example, HTTPS)
  • Configuring a firewall for your virtual private network that controls incoming and outgoing traffic with security groups and ACLs
  • Managing patches for the OS and additional software on virtual servers
  • Implementing access management that restricts access to AWS resources like S3 and EC2 to a minimum with IAM

Security in the cloud involves an interaction between AWS and you, the customer. If you play by the rules, you can achieve high security standards in the cloud.

6.2. Keeping your software up to date

Not a week goes by without the release of important updates to fix security vulnerabilities. Sometimes your OS is affected; or software libraries like OpenSSL; or environments like Java, Apache, and PHP; or applications like WordPress. If a security update is released, you must install it quickly, because the exploit may have been released with the update or because everyone can look at the source code to reconstruct the vulnerability. You should have a working plan for how to apply updates to all running servers as quickly as possible.

6.2.1. Checking for security updates

If you log in to an Amazon Linux EC2 instance via SSH, you’ll see the following message of the day:
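The exact banner and counts vary by AMI version, but the message looks roughly like this:

```
       __|  __|_  )
       _|  (     /   Amazon Linux AMI
      ___|\___|___|

4 package(s) needed for security, out of 10 available
Run "sudo yum update" to apply all updates.
```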

This example shows that four security updates are available; this number will vary when you look for updates. AWS won’t apply updates for you on your EC2 instances—you’re responsible for doing so.

You can use the yum package manager to handle updates on Amazon Linux. Run yum --security check-update to see which packages require a security update:
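A run might look something like the following; the package names and versions are only illustrations and will differ on your instance:

```
$ sudo yum --security check-update
[...]
openssl.x86_64    1.0.1k-1.84.amzn1    amzn-updates
unzip.x86_64      6.0-2.9.amzn1        amzn-updates
[...]
```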

We encourage you to follow the Amazon Linux AMI Security Center at https://alas.aws.amazon.com to receive security bulletins affecting Amazon Linux. Whenever a new security update is released, you should check whether you’re affected.

When dealing with security updates, you may face either of these two situations:

  • When the server starts the first time, many security updates need to be installed in order for the server to be up to date.
  • New security updates are released when your server is running, and you need to install these updates while the server is running.

Let’s look at how to handle these situations.

6.2.2. Installing security updates on server startup

If you create your EC2 instances with CloudFormation templates, you have three options for installing security updates on startup:

  • Install all updates on server start. Include yum -y update in your user-data script.
  • Install only security updates on server start. Include yum -y --security update in your user-data script.
  • Define the package versions explicitly. Install updates identified by a version number.

The first two options can be easily included in the user data of your EC2 instance. You install all updates as follows:
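A minimal sketch of the all-updates variant as a user-data fragment in a CloudFormation template (the surrounding instance resource is elided):

```json
"UserData": {"Fn::Base64": {"Fn::Join": ["", [
  "#!/bin/bash -ex\n",
  "yum -y update\n"
]]}}
```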

To install only security updates, do the following:
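Only the yum call changes for the security-only variant:

```json
"UserData": {"Fn::Base64": {"Fn::Join": ["", [
  "#!/bin/bash -ex\n",
  "yum -y --security update\n"
]]}}
```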

The problem with installing all updates is that your system becomes unpredictable. If your server was started last week, all updates available last week were applied. But in the meantime, new updates have been released. If you start a new server today and install all updates, you’ll end up with a different server than the one from last week. Different can mean that, for some reason, it no longer works. That’s why we encourage you to explicitly define the updates you want to install. To install security updates with an explicit version, you can use the yum update-to command, which updates a package to a specified version instead of the latest:

Using a CloudFormation template to describe an EC2 instance with explicitly defined updates looks like this:

[...]
"Server": {
  "Type": "AWS::EC2::Instance",
  "Properties": {
    [...]
    "UserData": {"Fn::Base64": {"Fn::Join": ["", [
      "#!/bin/bash -ex\n",
      "yum -y update-to openssl-1.0.1k-1.84.amzn1 unzip-6.0-2.9.amzn1\n"
    ]]}}
  }
}
[...]

The same approach works for non-security-related package updates. Whenever a new security update is released, you should check whether you’re affected and modify the user data to keep new systems secure.

6.2.3. Installing security updates on running servers

From time to time, you must install security updates on all your running servers. You could manually log in to all your servers using SSH and run yum -y --security update or yum update-to [...], but if you have many servers or the number of servers grows, this can be annoying. One way to automate this task is to use a small script that gets a list of your servers and executes yum in all of them. The following listing shows how this can be done in Bash. You can find the code in /chapter6/update.sh in the book’s code folder.

Listing 6.1. Installing security updates on all running EC2 instances
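A minimal sketch of such a script, assuming your instances run Amazon Linux, are reachable via their public DNS names, and your SSH key is loaded into the agent. The describe-instances filter and --query expression are assumptions based on the CLI’s output shape. The snippet writes the script to update.sh and syntax-checks it, since actually running it requires live instances:

```shell
# Write a sketch of update.sh: apply security updates to all running instances.
cat > update.sh <<'EOF'
#!/bin/bash -e
# List the public DNS names of all running EC2 instances.
PUBLICNAMES=$(aws ec2 describe-instances \
  --filters "Name=instance-state-name,Values=running" \
  --query "Reservations[].Instances[].PublicDnsName" \
  --output text)
# Log in to each server and apply security updates.
for PUBLICNAME in $PUBLICNAMES; do
  ssh -t "ec2-user@$PUBLICNAME" "sudo yum -y --security update"
done
EOF
bash -n update.sh && echo "syntax OK"
```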

Now you can quickly apply updates to all of your running servers.

Some security updates require you to reboot the virtual server—for example, if you need to patch the kernel of your virtual servers running on Linux. You can automate the reboot of the servers or switch to an updated AMI and start new virtual servers instead. For example, a new AMI of Amazon Linux is released four times a year.

6.3. Securing your AWS account

Securing your AWS account is critical. If someone gets access to your AWS account, they can steal your data, destroy everything (data, backups, servers), or steal your identity to do bad stuff. Figure 6.1 shows an AWS account. Each AWS account comes with a root user. In this book’s example, you’re using the root user when you use the Management Console; if you use the CLI, you’re using the mycli user that you created in section 4.2. In addition to the root user, an AWS account is a basket for all the resources you own: EC2 instances, CloudFormation stacks, IAM users, and so on.

Figure 6.1. An AWS account contains all the AWS resources and comes with a root user by default.

To access your AWS account, an attacker must be able to authenticate with your account. There are three ways to do so: using the root user, using a normal user, or authenticating as an AWS resource like an EC2 instance. To authenticate as a (root) user, the attacker needs the password or the access key. To authenticate as an AWS resource like an EC2 server, the attacker needs to send API/CLI requests from that EC2 instance.

In this section, you’ll begin protecting your root user with multifactor authentication (MFA). Then you’ll stop using the root user, create a new user for daily operations, and learn to grant least permissions to a role.

6.3.1. Securing your AWS account’s root user

We advise you to enable multifactor authentication (MFA) for your root user if you’re going to use AWS in production. After MFA is activated, you need a password and a temporary token to log in as the root user. Thus an attacker needs not only your password, but also your MFA device.

Follow these steps to enable MFA, as shown in figure 6.2:

Figure 6.2. Protect your root user with multifactor authentication (MFA).

1.  Click your name in the navigation bar at the top of the Management Console.

2.  Click Security Credentials.

3.  A pop-up may appear the first time. If it does, select Continue to Security Credentials.

4.  Install an MFA app on your smartphone (such as Google Authenticator).

5.  Expand the Multi-Factor Authentication (MFA) section.

6.  Click Activate MFA.

7.  Follow the instructions in the wizard. Use the MFA app on your smartphone to scan the QR code that is displayed.

If you’re using your smartphone as a virtual MFA device, it’s a good idea not to log in to the Management Console from your smartphone or to store the root user’s password on the phone. Keep the MFA token separate from your password.

6.3.2. Identity and Access Management service

The Identity and Access Management (IAM) service provides everything needed for authentication and authorization with the AWS API. Every request you make to the AWS API goes through IAM to check whether the request is allowed. IAM controls who (authentication) can do what (authorization) in your AWS account: who’s allowed to create EC2 instances? Is the user allowed to terminate a specific EC2 instance?

Authentication with IAM is done with users or roles, whereas authorization is done by policies. How do users and roles differ? Table 6.1 shows the differences. Roles authenticate an EC2 instance; a user should be used for everything else.

Table 6.1. Differences between root user, IAM user, and IAM role
 

                                         Root user              IAM user  IAM role

Can have a password                      Always                 Yes       No
Can have an access key                   Yes (not recommended)  Yes       No
Can belong to a group                    No                     Yes       No
Can be associated with an EC2 instance   No                     No        Yes

IAM users and IAM roles use policies for authorization. Let’s look at policies first; then we’ll continue with users and roles. Keep in mind that users and roles can’t do anything until you allow certain actions with a policy.

6.3.3. Policies for authorization

A policy is defined in JSON and contains one or more statements. A statement can either allow or deny specific actions on specific resources. You can find an overview of all the actions available for EC2 resources at http://mng.bz/WQ3D. The wildcard character * can be used to create more generic statements.

The following policy has one statement that allows every action for the EC2 service for all resources:
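A minimal sketch of such a policy (the Sid value is an arbitrary statement identifier):

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "1",
    "Effect": "Allow",
    "Action": ["ec2:*"],
    "Resource": ["*"]
  }]
}
```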

If you have multiple statements that apply to the same action, Deny overrides Allow. The following policy allows all EC2 actions except terminating instances:
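A sketch of such a policy: the first statement allows all EC2 actions, and the Deny statement overrides it for ec2:TerminateInstances:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "1",
    "Effect": "Allow",
    "Action": ["ec2:*"],
    "Resource": ["*"]
  }, {
    "Sid": "2",
    "Effect": "Deny",
    "Action": ["ec2:TerminateInstances"],
    "Resource": ["*"]
  }]
}
```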

The following policy denies all EC2 actions. The ec2:TerminateInstances statement isn’t crucial, because Deny overrides Allow. When you deny an action, you can’t allow that action with another statement:
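A sketch: the Deny on ec2:* wins over the Allow on ec2:TerminateInstances, so the second statement has no effect:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "1",
    "Effect": "Deny",
    "Action": ["ec2:*"],
    "Resource": ["*"]
  }, {
    "Sid": "2",
    "Effect": "Allow",
    "Action": ["ec2:TerminateInstances"],
    "Resource": ["*"]
  }]
}
```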

So far, the Resource part has been ["*"] for every resource. Resources in AWS have an Amazon Resource Name (ARN); figure 6.3 shows the ARN of an EC2 instance.

Figure 6.3. Components of an Amazon Resource Name (ARN) identifying an EC2 instance

To find out the account ID, you can use the CLI:
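One way (the --query expression is one of several possible approaches) is to read the account ID out of your own user’s ARN; it’s the twelve-digit number in the middle:

```
$ aws iam get-user --query "User.Arn" --output text
arn:aws:iam::878533158213:user/mycli
```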

If you know your account ID, you can use ARNs to allow access to specific resources of a service:

{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "2",
    "Effect": "Allow",
    "Action": ["ec2:TerminateInstances"],
    "Resource": ["arn:aws:ec2:us-east-1:878533158213:instance/i-3dd4f812"]
  }]
}

There are two types of policies:

  • Managed policy —If you want to create policies that can be reused in your account, a managed policy is what you’re looking for. There are two types of managed policies:

    • AWS managed policy —A policy that is maintained by AWS. There are policies that grant admin rights, read-only rights, and so on.
    • Customer managed policy —A policy maintained by you; for example, a policy that represents the roles in your organization.
  • Inline policy —A policy that belongs to a certain IAM role, user, or group. The inline policy can’t exist without the IAM role, the user, or the group.

With CloudFormation, it’s easy to maintain inline policies; that’s why we use inline policies most of the time in this book. One exception is the mycli user: this user has the AWS managed policy AdministratorAccess attached.

6.3.4. Users for authentication, and groups to organize users

A user can authenticate with either a password or an access key. When you log in to the Management Console, you’re authenticating with your password. When you use the CLI from your computer, you use an access key to authenticate as the mycli user.

You’re using the root user at the moment to log in to the Management Console. Because granting least permissions is always a good idea, you’ll create a new user for the Management Console. To make things easier if you want to add users in the future, you’ll first create a group for all admin users. A group can’t be used to authenticate, but it centralizes authorization. If you want to stop your admin users from terminating EC2 servers, you only need to change the policy for the group instead of changing it for all admin users. A user can be a member of none, one, or multiple groups.

It’s easy to create groups and users with the CLI. Replace $Password with a secure password:

$ aws iam create-group --group-name "admin"
$ aws iam attach-group-policy --group-name "admin" \
  --policy-arn "arn:aws:iam::aws:policy/AdministratorAccess"
$ aws iam create-user --user-name "myuser"
$ aws iam add-user-to-group --group-name "admin" --user-name "myuser"
$ aws iam create-login-profile --user-name "myuser" --password "$Password"

The user myuser is ready to be used. But you must use a different URL to access the Management Console if you aren’t using the root user: https://$accountId.signin.aws.amazon.com/console. Replace $accountId with the account ID that you extracted earlier with the aws iam get-user call.

Enabling MFA for IAM users

We encourage you to enable MFA for all users as well. If possible, don’t use the same MFA device for your root user and everyday users. You can buy hardware MFA devices for $13 from AWS partners like Gemalto. To enable MFA for your users, follow these steps:

1.  Open the IAM service in the Management Console.

2.  Choose Users at left.

3.  Select the myuser user.

4.  Click the Manage MFA Device button in the Sign-In Credentials section at the bottom of the page. The wizard is the same as for the root user.

You should have MFA activated for all users who have a password—users who can be used with the Management Console.

Warning

Stop using the root user from now on. Always use myuser and the new link to the Management Console.

Warning

You should never copy a user’s access key to an EC2 instance; use IAM roles instead! Don’t store security credentials in your source code. And never ever check them into your Git or SVN repository. Try to use IAM roles instead whenever possible.

6.3.5. Roles for authentication of AWS resources

An IAM role can be used to authenticate AWS resources like virtual servers. An EC2 instance can have at most one role attached, via an instance profile. Each AWS API request from the instance authenticates with that role, and IAM checks all policies attached to the role to determine whether the request is allowed. By default, EC2 instances have no role and therefore aren’t allowed to make any calls to the AWS API.

Do you remember the temporary EC2 instances from chapter 4? It appeared that temporary servers weren’t terminated—people forget to do so. A lot of money was wasted because of that. You’ll now create an EC2 instance that stops itself after a while. The at command stops the instance after a 5-minute delay:

echo "aws ec2 stop-instances --instance-ids i-3dd4f812" | at now + 5 minutes

The EC2 instance needs permission to stop itself. You can use an inline policy to allow this. The following listing shows how you define a role as a resource in CloudFormation:
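A sketch of such a role: the trust policy (AssumeRolePolicyDocument) lets the EC2 service assume the role, and the inline policy allows only ec2:StopInstances. The resource name Role and the wide-open Resource element are assumptions; the book’s actual template may restrict the resource further:

```json
"Role": {
  "Type": "AWS::IAM::Role",
  "Properties": {
    "AssumeRolePolicyDocument": {
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": ["ec2.amazonaws.com"]},
        "Action": ["sts:AssumeRole"]
      }]
    },
    "Path": "/",
    "Policies": [{
      "PolicyName": "ec2",
      "PolicyDocument": {
        "Version": "2012-10-17",
        "Statement": [{
          "Effect": "Allow",
          "Action": ["ec2:StopInstances"],
          "Resource": ["*"]
        }]
      }
    }]
  }
}
```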

To attach an inline role to an instance, you must first create an instance profile:

"InstanceProfile": {
  "Type": "AWS::IAM::InstanceProfile",
  "Properties": {
    "Path": "/",
    "Roles": [{"Ref": "Role"}]
  }
}

Now you can combine the role with the EC2 instance:

"Server": {
  "Type": "AWS::EC2::Instance",
  "Properties": {
    "IamInstanceProfile": {"Ref": "InstanceProfile"},
    [...],
    "UserData": {"Fn::Base64": {"Fn::Join": ["", [
      "#!/bin/bash -ex\n",
      "INSTANCEID=`curl -s ",
      "http://169.254.169.254/latest/meta-data/instance-id`\n",
      "echo \"aws --region us-east-1 ec2 stop-instances ",
      "--instance-ids $INSTANCEID\" | at now + 5 minutes\n"
    ]]}}
  }
}

Create the CloudFormation stack with the template located at https://s3.amazonaws.com/awsinaction/chapter6/server.json. You can specify the lifetime of the server via a parameter. Wait until the lifetime is reached and see if your instance is stopped. The lifetime begins when the server is fully started and booted.

Cleaning up

Don’t forget to delete your stack after you finish this section to clean up all used resources. Otherwise you’ll likely be charged for the resources you use.

6.4. Controlling network traffic to and from your virtual server

You want only necessary traffic to enter or leave your EC2 instance. With a firewall, you can control incoming (also called inbound or ingress) and outgoing (also called outbound or egress) traffic. If you run a web server, the only ports you need to open to the outside world are port 80 for HTTP traffic and 443 for HTTPS traffic. All other ports should be closed. Open only the ports that must be open, just as you grant least permissions with IAM. A strict firewall shuts down a lot of possible security holes. You can also prevent the accidental sending of mail to customers from a test system by not opening outgoing SMTP connections for test systems.

Before network traffic can enter or leave your EC2 instance, it goes through a firewall provided by AWS. The firewall inspects the network traffic and uses rules to decide whether the traffic is allowed or denied.

IP vs. IP address

The abbreviation IP is used for Internet Protocol, whereas an IP address is something like 84.186.116.47.

Figure 6.4 shows how an SSH request from a source IP address 10.0.0.10 is inspected by the firewall and received by the destination IP address 10.10.0.20. In this case, the firewall allows the request because there’s a rule that allows TCP traffic on port 22 between the source and the destination.

Figure 6.4. How an SSH request travels from source to destination, controlled by a firewall

Source vs. destination

Inbound security-group rules filter based on the source of the network traffic. The source is either an IP address or a security group. Thus you can allow inbound traffic only from specific source IP address ranges.

Outbound security-group rules filter based on the destination of the network traffic. The destination is either an IP address or a security group. You can allow outbound traffic to only specific destination IP address ranges.

AWS is responsible for the firewall, but you’re responsible for the rules. By default, all inbound traffic is denied and all outbound traffic is allowed. You can then begin to allow inbound traffic. If you add rules for outgoing traffic, the default will switch from allow all to deny all, and only the exceptions you add will be allowed.

6.4.1. Controlling traffic to virtual servers with security groups

A security group can be associated with AWS resources like EC2 instances. It’s common for EC2 instances to have more than one security group associated with them and for the same security group to be associated with many EC2 instances.

A security group follows a set of rules. A rule can allow network traffic based on the following:

  • Direction (inbound or outbound)
  • IP protocol (TCP, UDP, ICMP)
  • Source/destination IP address
  • Port
  • Source/destination security group (works only in AWS)

You can define rules that allow all traffic to enter and leave your server; AWS won’t prevent you from doing so. But it’s good practice to define your rules so they’re as restrictive as possible.

A security group resource in CloudFormation is of type AWS::EC2::SecurityGroup. The following listing is in /chapter6/firewall1.json in the book’s code folder: the template describes an empty security group associated with a single EC2 instance.

Listing 6.2. Empty security group associated with a single EC2 instance
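A sketch of the relevant resources; the names SecurityGroup and Server are assumptions, and instance properties such as ImageId are elided:

```json
"SecurityGroup": {
  "Type": "AWS::EC2::SecurityGroup",
  "Properties": {
    "GroupDescription": "Learn how to protect your EC2 instance."
  }
},
"Server": {
  "Type": "AWS::EC2::Instance",
  "Properties": {
    "SecurityGroupIds": [{"Ref": "SecurityGroup"}],
    [...]
  }
}
```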

To explore security groups, you can try the CloudFormation template located at https://s3.amazonaws.com/awsinaction/chapter6/firewall1.json. Create a stack based on that template, and then copy the PublicName from the stack output.

6.4.2. Allowing ICMP traffic

If you want to ping an EC2 instance from your computer, you must allow inbound Internet Control Message Protocol (ICMP) traffic. By default, all inbound traffic is blocked. Try ping $PublicName to make sure ping isn’t working:

$ ping ec2-52-5-109-147.compute-1.amazonaws.com
PING ec2-52-5-109-147.compute-1.amazonaws.com (52.5.109.147): 56 data bytes
Request timeout for icmp_seq 0
Request timeout for icmp_seq 1
[...]

You need to add a rule to the security group that allows inbound traffic, where the protocol equals ICMP. The following listing can be found at /chapter6/firewall2.json in the book’s code folder.

Listing 6.3. Security group that allows ICMP
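The rule might be added to the security group like this; a FromPort/ToPort of -1 means all ICMP types and codes:

```json
"SecurityGroup": {
  "Type": "AWS::EC2::SecurityGroup",
  "Properties": {
    "GroupDescription": "Learn how to protect your EC2 instance.",
    "SecurityGroupIngress": [{
      "IpProtocol": "icmp",
      "FromPort": "-1",
      "ToPort": "-1",
      "CidrIp": "0.0.0.0/0"
    }]
  }
}
```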

Update the CloudFormation stack with the template located at https://s3.amazonaws.com/awsinaction/chapter6/firewall2.json, and retry the ping command. It should now look like this:

$ ping ec2-52-5-109-147.compute-1.amazonaws.com
PING ec2-52-5-109-147.compute-1.amazonaws.com (52.5.109.147): 56 data bytes
64 bytes from 52.5.109.147: icmp_seq=0 ttl=49 time=112.222 ms
64 bytes from 52.5.109.147: icmp_seq=1 ttl=49 time=121.893 ms
[...]
round-trip min/avg/max/stddev = 112.222/117.058/121.893/4.835 ms

Everyone’s inbound ICMP traffic (every source IP address) is now allowed to reach the EC2 instance.

6.4.3. Allowing SSH traffic

Once you can ping your EC2 instance, you want to log in to your server via SSH. To do so, you must create a rule to allow inbound TCP requests on port 22.

Listing 6.4. Security group that allows SSH
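A sketch of the additional ingress rule, appended to the SecurityGroupIngress list of the existing security group:

```json
{
  "IpProtocol": "tcp",
  "FromPort": "22",
  "ToPort": "22",
  "CidrIp": "0.0.0.0/0"
}
```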

Update the CloudFormation stack with the template located at https://s3.amazonaws.com/awsinaction/chapter6/firewall3.json. You can now log in to your server using SSH. Keep in mind that you still need the correct private key. The firewall only controls the network layer; it doesn’t replace key-based or password-based authentication.

6.4.4. Allowing SSH traffic from a source IP address

So far, you’re allowing inbound traffic on port 22 (SSH) from every source IP address. You can restrict access to only your IP address.

Hard-coding the public IP address into the template isn’t a good solution because this changes from time to time. But you already know the solution: parameters. You need to add a parameter that holds your current public IP address, and you need to modify the AllowInboundSSH rule. You can find the following listing in /chapter6/firewall4.json in the book’s code folder.

Listing 6.5. Security group that allows SSH only from specific IP address
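A sketch of the parameter and the modified rule. The parameter name IPForSSH matches the name used when creating the stack; the Fn::Join appends /32 so only that single address is allowed:

```json
"Parameters": {
  "IPForSSH": {
    "Description": "Your public IP address",
    "Type": "String"
  }
},
[...]
"SecurityGroupIngress": [{
  "IpProtocol": "tcp",
  "FromPort": "22",
  "ToPort": "22",
  "CidrIp": {"Fn::Join": ["", [{"Ref": "IPForSSH"}, "/32"]]}
}]
```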

What’s the difference between public and private IP addresses?

On my local network, I’m using private IP addresses that start with 192.168.0.*. My laptop uses 192.168.0.10, and my iPad uses 192.168.0.20. But if I access the internet, I have the same public IP (such as 79.241.98.155) for my laptop and iPad. That’s because only my internet gateway (the box that connects to the internet) has a public IP address, and all requests are redirected by the gateway (if you want to dive deep into this, search for network address translation). Your local network doesn’t know about this public IP address. My laptop and iPad only know that the internet gateway is reachable under 192.168.0.1 on the private network.

To find out your public IP address, visit http://api.ipify.org. For most of us, our public IP address changes from time to time, usually when we reconnect to the internet (which happens every 24 hours in my case).

Update the CloudFormation stack with the template located at https://s3.amazonaws.com/awsinaction/chapter6/firewall4.json. Type in your public IP address $IPForSSH when asked for parameters. Now only your IP address can open SSH connections to your EC2 instance.

Classless Inter-Domain Routing (CIDR)

You may wonder what /32 means in listing 6.5. To understand what’s going on, you need to switch your brain into binary mode. An IP address is 4 bytes or 32 bits long. The /32 defines how many bits (32, in this case) should be used to form a range of addresses. If you want to define the exact IP address that’s allowed, you must use all 32 bits.

But sometimes it makes sense to define a range of allowed IP addresses. For example, you can use 10.0.0.0/8 to create a range between 10.0.0.0 and 10.255.255.255, 10.0.0.0/16 to create a range between 10.0.0.0 and 10.0.255.255, or 10.0.0.0/24 to create a range between 10.0.0.0 and 10.0.0.255. You aren’t required to use the binary boundaries (8, 16, 24, 32), but they’re easier for most people to understand. You already used 0.0.0.0/0 to create a range that contains every possible IP address.
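The size of a CIDR block follows directly from the prefix length: 2^(32 - prefix) addresses. You can check this with shell arithmetic:

```shell
# Number of IP addresses in a CIDR block = 2^(32 - prefix length)
echo $((2 ** (32 - 32)))  # /32: exactly 1 address
echo $((2 ** (32 - 24)))  # /24: 256 addresses (10.0.0.0-10.0.0.255)
echo $((2 ** (32 - 16)))  # /16: 65536 addresses (10.0.0.0-10.0.255.255)
echo $((2 ** (32 - 8)))   # /8: 16777216 addresses (10.0.0.0-10.255.255.255)
```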

Now you can control network traffic that comes from outside AWS or goes outside AWS by filtering based on protocol, port, and source IP address.

6.4.5. Allowing SSH traffic from a source security group

If you want to control traffic from one AWS resource (like an EC2 instance) to another, security groups are powerful. You can control network traffic based on whether the source or destination belongs to a specific security group. For example, you can define that a MySQL database can only be accessed if the traffic comes from your web servers, or that only your web cache servers are allowed to access the web servers. Because of the elastic nature of the cloud, you’ll likely deal with a dynamic number of servers, so rules based on source IP addresses are difficult to maintain. This becomes easy if your rules are based on source security groups.

To explore the power of rules based on a source security group, let’s look at the concept of a bastion host for SSH access (some people call it a jump box). The trick is that only one server, the bastion host, can be accessed via SSH from the internet (it should be restricted to a specific source IP address). All other servers can only be reached via SSH from the bastion host. This approach has two advantages:

  • You have only one entry point into your system, and that entry point does nothing but SSH. The chances of this box being hacked are small.
  • If one of your web servers, mail servers, FTP servers, and so on, is hacked, the attacker can’t jump from that server to all the other servers.

To implement the concept of a bastion host, you must follow these two rules:

  • Allow SSH access to the bastion host from 0.0.0.0/0 or a specific source address.
  • Allow SSH access to all other servers only if the traffic source is the bastion host.

Figure 6.5 shows a bastion host with two servers that are only reachable via SSH from the bastion host.

Figure 6.5. The bastion host is the only SSH access point to the system from which you can reach all the other servers via SSH (realized with security groups).

The following listing shows the SSH rule that allows traffic from a specific source security group.

Listing 6.6. Security group that allows SSH from bastion host
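A sketch of such a rule, assuming the bastion host’s security group resource is named SecurityGroupBastionHost; instead of a CidrIp, the rule references the source security group:

```json
"SecurityGroupIngress": [{
  "IpProtocol": "tcp",
  "FromPort": "22",
  "ToPort": "22",
  "SourceSecurityGroupId": {"Ref": "SecurityGroupBastionHost"}
}]
```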

Update the CloudFormation stack with the template located at https://s3.amazonaws.com/awsinaction/chapter6/firewall5.json. If the update is completed, the stack shows three outputs:

  • BastionHostPublicName —Use the bastion host to connect via SSH from your computer.
  • Server1PublicName —You can connect to this server only from the bastion host.
  • Server2PublicName —You can connect to this server only from the bastion host.

Now connect to BastionHostPublicName via SSH using ssh -i $PathToKey/mykey.pem -A ec2-user@$BastionHostPublicName. Replace $PathToKey with the path to your SSH key and $BastionHostPublicName with the public name of the bastion host. The -A option is important to enable AgentForwarding; agent forwarding lets you authenticate with the same key you used to log in to the bastion host for further SSH logins initiated from the bastion host.

Execute the following command to add your key to the SSH agent. Replace $PathToKey with the path to the SSH key:

ssh-add $PathToKey/mykey.pem

6.4.6. Agent forwarding with PuTTY

To make agent forwarding work with PuTTY, you need to make sure your key is loaded to PuTTY Pageant by double-clicking the private key file. You must also enable Connection > SSH > Auth > Allow Agent Forwarding, as shown in figure 6.6.

Figure 6.6. Allow agent forwarding with PuTTY.

From the bastion host, you can then continue to log in to $Server1PublicName or $Server2PublicName:
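Because of agent forwarding, no key file is needed on the bastion host:

```
$ ssh ec2-user@$Server1PublicName
```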

The bastion host can be used to add a layer of security to your system. If one of your servers is compromised, an attacker can’t jump to other servers in your system. This reduces the potential damage an attacker can inflict. It’s important that the bastion host does nothing but SSH, to reduce the chance of it becoming a security risk. We use the bastion-host pattern frequently to protect our clients.

Cleaning up

Don’t forget to delete your stack after you finish this section to clean up all used resources. Otherwise you’ll likely be charged for the resources you use.

6.5. Creating a private network in the cloud: Virtual Private Cloud (VPC)

By creating a Virtual Private Cloud (VPC), you get your own private network on AWS. Private means you can use the address ranges 10.0.0.0/8, 172.16.0.0/12, or 192.168.0.0/16 to design a network that isn’t necessarily connected to the public internet. You can create subnets, route tables, access control lists (ACLs), and gateways to the internet or a VPN endpoint.

A subnet allows you to separate concerns. Create a new subnet for your databases, web servers, caching servers, or application servers, or whenever you can separate two systems. Another rule of thumb is that you should have at least two kinds of subnets: public and private. A public subnet has a route to the internet; a private subnet doesn't. Your web servers belong in the public subnet, and your databases reside in the private subnet.

For the purpose of understanding how a VPC works, you’ll create a VPC to host an enterprise web application. You’ll re-implement the bastion host concept from the previous section by creating a public subnet that contains only the bastion host server. You’ll also create a private subnet for your web servers and one public subnet for your web caches. The web caches absorb most of the traffic by responding with the latest version of the page they have in their cache, and they redirect traffic to the private web servers. You can’t access a web server directly over the internet—only through the web caches.

The VPC uses the address space 10.0.0.0/16. To separate concerns, you’ll create two public subnets and one private subnet in the VPC:

  • 10.0.1.0/24 public SSH bastion host subnet
  • 10.0.2.0/24 public Varnish web cache subnet
  • 10.0.3.0/24 private Apache web server subnet
What does 10.0.0.0/16 mean?

10.0.0.0/16 represents all IP addresses between 10.0.0.0 and 10.0.255.255. It’s using CIDR notation (explained earlier in the chapter).

Network ACLs act like a firewall between subnets: they restrict the traffic that flows from one subnet to another. The SSH bastion host from section 6.4 can be implemented with these ACLs:

  • SSH from 0.0.0.0/0 to 10.0.1.0/24 is allowed.
  • SSH from 10.0.1.0/24 to 10.0.2.0/24 is allowed.
  • SSH from 10.0.1.0/24 to 10.0.3.0/24 is allowed.

To allow traffic to the Varnish web cache and the HTTP servers, additional ACLs are required:

  • HTTP from 0.0.0.0/0 to 10.0.2.0/24 is allowed.
  • HTTP from 10.0.2.0/24 to 10.0.3.0/24 is allowed.

Figure 6.7 shows the architecture of the VPC.

Figure 6.7. VPC with three subnets to secure a web application

You’ll use CloudFormation to describe the VPC with its subnets. The template is split into smaller parts to make it easier to read in the book. As usual, you’ll find the code in the book’s code repository on GitHub: https://github.com/AWSinAction/code. The template is located at /chapter6/vpc.json.

6.5.1. Creating the VPC and an internet gateway (IGW)

The first resources in the template are the VPC and the internet gateway (IGW). The IGW will translate the public IP addresses of your virtual servers to their private IP addresses using network address translation (NAT). All public IP addresses used in the VPC are controlled by this IGW:
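The full listing is in /chapter6/vpc.json in the book's code repository; the following is a condensed sketch of the Resources section (the logical names here are illustrative and may differ from the repository version):

```json
{
  "VPC": {
    "Type": "AWS::EC2::VPC",
    "Properties": {
      "CidrBlock": "10.0.0.0/16",
      "EnableDnsHostnames": "true"
    }
  },
  "InternetGateway": {
    "Type": "AWS::EC2::InternetGateway",
    "Properties": {}
  },
  "VPCGatewayAttachment": {
    "Type": "AWS::EC2::VPCGatewayAttachment",
    "Properties": {
      "VpcId": {"Ref": "VPC"},
      "InternetGatewayId": {"Ref": "InternetGateway"}
    }
  }
}
```

Note that the IGW is a separate resource that must be explicitly attached to the VPC with a VPCGatewayAttachment; creating the two resources alone doesn't connect them.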

Next you’ll define the subnet for the bastion host.

6.5.2. Defining the public bastion host subnet

The bastion host subnet will only run a single machine to secure SSH access:
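A public subnet needs three things: the subnet itself, a route table, and a route to the IGW. A condensed sketch, with illustrative logical names (see /chapter6/vpc.json for the complete version):

```json
{
  "SubnetPublicBastion": {
    "Type": "AWS::EC2::Subnet",
    "Properties": {
      "CidrBlock": "10.0.1.0/24",
      "VpcId": {"Ref": "VPC"}
    }
  },
  "RouteTablePublicBastion": {
    "Type": "AWS::EC2::RouteTable",
    "Properties": {"VpcId": {"Ref": "VPC"}}
  },
  "RouteTableAssociationPublicBastion": {
    "Type": "AWS::EC2::SubnetRouteTableAssociation",
    "Properties": {
      "SubnetId": {"Ref": "SubnetPublicBastion"},
      "RouteTableId": {"Ref": "RouteTablePublicBastion"}
    }
  },
  "RoutePublicBastionToInternet": {
    "Type": "AWS::EC2::Route",
    "DependsOn": "VPCGatewayAttachment",
    "Properties": {
      "RouteTableId": {"Ref": "RouteTablePublicBastion"},
      "DestinationCidrBlock": "0.0.0.0/0",
      "GatewayId": {"Ref": "InternetGateway"}
    }
  }
}
```

The route to 0.0.0.0/0 via the IGW is what makes this a public subnet.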

The definition of the ACL follows:
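The following sketch (again with illustrative names) shows the two ACL entries that matter most for the bastion host subnet: inbound SSH from anywhere, and outbound ephemeral ports for the responses. Protocol 6 is TCP:

```json
{
  "NetworkAclPublicBastion": {
    "Type": "AWS::EC2::NetworkAcl",
    "Properties": {"VpcId": {"Ref": "VPC"}}
  },
  "AclEntryInSSH": {
    "Type": "AWS::EC2::NetworkAclEntry",
    "Properties": {
      "NetworkAclId": {"Ref": "NetworkAclPublicBastion"},
      "RuleNumber": "100",
      "Protocol": "6",
      "RuleAction": "allow",
      "Egress": "false",
      "CidrBlock": "0.0.0.0/0",
      "PortRange": {"From": "22", "To": "22"}
    }
  },
  "AclEntryOutEphemeral": {
    "Type": "AWS::EC2::NetworkAclEntry",
    "Properties": {
      "NetworkAclId": {"Ref": "NetworkAclPublicBastion"},
      "RuleNumber": "100",
      "Protocol": "6",
      "RuleAction": "allow",
      "Egress": "true",
      "CidrBlock": "0.0.0.0/0",
      "PortRange": {"From": "1024", "To": "65535"}
    }
  },
  "SubnetAclAssociationPublicBastion": {
    "Type": "AWS::EC2::SubnetNetworkAclAssociation",
    "Properties": {
      "SubnetId": {"Ref": "SubnetPublicBastion"},
      "NetworkAclId": {"Ref": "NetworkAclPublicBastion"}
    }
  }
}
```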

There’s an important difference between security groups and ACLs: security groups are stateful, but ACLs aren’t. If you allow an inbound port on a security group, the outbound response that belongs to a request on the inbound port is allowed as well. A security group rule will work as you expect it to. If you open inbound port 22 on a security group, you can connect via SSH.

That’s not true for ACLs. If you open inbound port 22 on an ACL for your subnet, you can’t connect via SSH. In addition, you need to allow outbound ephemeral ports because sshd (SSH daemon) accepts connections on port 22 but uses an ephemeral port for communication with the client. Ephemeral ports are selected from the range starting at 1024 and ending at 65535.

If you want to make an SSH connection from within your subnet, you have to open outbound port 22 and inbound ephemeral ports as well. If you aren't familiar with all this, you should go with security groups and allow everything on the ACL level.

6.5.3. Adding the private Apache web server subnet

The subnet for the Varnish web cache is similar to the bastion host subnet because it’s also a public subnet; that’s why we’ll skip it. You’ll continue with the private subnet for the Apache web server:
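A sketch of the private subnet, with illustrative logical names: it looks like the public subnets, except that its route table contains no route to the IGW (see /chapter6/vpc.json for the complete listing):

```json
{
  "SubnetPrivateApache": {
    "Type": "AWS::EC2::Subnet",
    "Properties": {
      "CidrBlock": "10.0.3.0/24",
      "VpcId": {"Ref": "VPC"}
    }
  },
  "RouteTablePrivateApache": {
    "Type": "AWS::EC2::RouteTable",
    "Properties": {"VpcId": {"Ref": "VPC"}}
  },
  "RouteTableAssociationPrivateApache": {
    "Type": "AWS::EC2::SubnetRouteTableAssociation",
    "Properties": {
      "SubnetId": {"Ref": "SubnetPrivateApache"},
      "RouteTableId": {"Ref": "RouteTablePrivateApache"}
    }
  }
}
```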

The only difference between a public and a private subnet is that a private subnet doesn’t have a route to the IGW. Traffic between subnets of a VPC is always routed by default. You can’t remove the routes between the subnets. If you want to prevent traffic between subnets in a VPC, you need to use ACLs attached to the subnets.

6.5.4. Launching servers in the subnets

Your subnets are ready and you can continue with the EC2 instances. First you describe the bastion host:
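A condensed sketch of the bastion host instance (logical names are illustrative, and ami-XXXXXXXX stands in for the Amazon Linux AMI ID of your region; see /chapter6/vpc.json for the complete version). Because instances in a VPC subnet don't get a public IP address by default, AssociatePublicIpAddress must be set explicitly:

```json
{
  "BastionHost": {
    "Type": "AWS::EC2::Instance",
    "Properties": {
      "ImageId": "ami-XXXXXXXX",
      "InstanceType": "t2.micro",
      "KeyName": {"Ref": "KeyName"},
      "NetworkInterfaces": [{
        "AssociatePublicIpAddress": "true",
        "DeviceIndex": "0",
        "SubnetId": {"Ref": "SubnetPublicBastion"}
      }]
    }
  }
}
```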

The Varnish server looks similar. But again, the private Apache web server differs in configuration:
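A sketch of the Apache server (illustrative names, placeholder AMI ID): it sits in the private subnet, gets no public IP address, and tries to install Apache at boot via a user-data script:

```json
{
  "ApacheServer": {
    "Type": "AWS::EC2::Instance",
    "Properties": {
      "ImageId": "ami-XXXXXXXX",
      "InstanceType": "t2.micro",
      "KeyName": {"Ref": "KeyName"},
      "SubnetId": {"Ref": "SubnetPrivateApache"},
      "UserData": {"Fn::Base64": {"Fn::Join": ["", [
        "#!/bin/bash -ex\n",
        "yum -y install httpd\n",
        "service httpd start\n"
      ]]}}
    }
  }
}
```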

You’re now in serious trouble: installing Apache won’t work because your private subnet has no route to the internet.

6.5.5. Accessing the internet from private subnets via a NAT server

Public subnets have a route to the internet gateway. You can use a similar mechanism to provide internet access for private subnets without having a direct route to the internet: use a NAT server in a public subnet, and create a route from your private subnet to the NAT server. A NAT server is a virtual server that handles network address translation. Internet traffic from your private subnet will access the internet from the public IP address of the NAT server.

Warning

Traffic from your EC2 instances to other AWS services that are accessed via the API (Object Store S3, NoSQL database DynamoDB) will go through the NAT instance. This can quickly become a major bottleneck. If your EC2 instances need to communicate heavily with the internet, the NAT instance is most likely not a good idea. Consider launching these instances in a public subnet instead.

To keep concerns separated, you’ll create a new subnet for the NAT server. AWS provides an image (AMI) for a virtual server that has the configuration done for you:

Now you’re ready to create the CloudFormation stack with the template located at https://s3.amazonaws.com/awsinaction/chapter6/vpc.json. Once you’ve done so, copy the VarnishServerPublicName output and open it in your browser. You’ll see an Apache test page that was cached by Varnish.

Cleaning up

Don’t forget to delete your stack after finishing this section, to clean up all used resources. Otherwise you’ll likely be charged for the resources you use.

6.6. Summary

  • AWS is a shared-responsibility environment in which security can be achieved only if you and AWS work together. You’re responsible for securely configuring your AWS resources and your software running on EC2 instances while AWS protects buildings and host systems.
  • Keeping your software up to date is key and can be automated.
  • The Identity and Access Management (IAM) service provides everything needed for authentication and authorization with the AWS API. Every request you make to the AWS API goes through IAM to check whether the request is allowed. IAM controls who can do what in your AWS account. Grant least permissions to your users and roles to protect your AWS account.
  • Traffic to or from AWS resources like EC2 instances can be filtered based on protocol, port, and source or destination with the help of security groups.
  • A bastion host is a well-defined, single point of access to your system. It can be used to secure SSH access to your servers. Implementation can be done with security groups or ACLs.
  • A VPC is a private network in AWS where you have full control. With VPCs, you can control routing, subnets, ACLs, and gateways to the internet or your company network via VPN.
  • You should separate concerns in your network to reduce potential damage if, for example, one of your subnets is hacked. Keep every system in a private subnet that doesn’t need to be accessed from the public internet, to reduce your attackable surface.