This chapter covers
If security is a wall, you’ll need a lot of bricks to build that wall. This chapter focuses on the four most important bricks to secure your systems on AWS:
The examples in this chapter are completely covered by the Free Tier. As long as you don’t run the examples for longer than a few days, you won’t pay anything. Keep in mind that this only applies if you created a fresh AWS account for this book and nothing else is going on in your AWS account. Try to complete the examples of the chapter within a few days; you’ll clean up your account at the end of each example.
One important brick is missing: securing your self-developed applications. You need to check user input and allow only the necessary characters, avoid storing passwords in plain text, use SSL to encrypt traffic between your servers and your users, and so on.
To fully understand this chapter, you should be familiar with the following concepts:
AWS is a shared-responsibility environment, meaning responsibility is shared between AWS and you. AWS is responsible for the following:
The security standards are reviewed by third parties; you can find an up-to-date overview at http://aws.amazon.com/compliance/.
What are your responsibilities?
Security in the cloud involves an interaction between AWS and you, the customer. If you play by the rules, you can achieve high security standards in the cloud.
Not a week goes by without the release of important updates to fix security vulnerabilities. Sometimes your OS is affected; or software libraries like OpenSSL; or environments like Java, Apache, and PHP; or applications like WordPress. If a security update is released, you must install it quickly, because the exploit may have been released with the update or because everyone can look at the source code to reconstruct the vulnerability. You should have a working plan for how to apply updates to all running servers as quickly as possible.
If you log in to an Amazon Linux EC2 instance via SSH, you’ll see the following message of the day:
This example shows that four security updates are available; this number will vary when you look for updates. AWS won’t apply updates for you on your EC2 instances—you’re responsible for doing so.
You can use the yum package manager to handle updates on Amazon Linux. Run yum --security check-update to see which packages require a security update:
We encourage you to follow the Amazon Linux AMI Security Center at https://alas.aws.amazon.com to receive security bulletins affecting Amazon Linux. Whenever a new security update is released, you should check whether you’re affected.
When dealing with security updates, you may face either of these two situations: you're starting a new EC2 instance and want it to boot with current security updates applied, or you have servers already running that need to be patched.
Let's look at how to handle these situations.
If you create your EC2 instances with CloudFormation templates, you have three options for installing security updates on startup: install all updates, install only security updates, or install a set of explicitly defined updates.
The first two options can easily be included in the user data of your EC2 instance. To install all updates on startup, run yum -y update. To install only security updates, run yum -y --security update instead.
The problem with installing all updates is that your system becomes unpredictable. If your server was started last week, all updates available last week were applied. In the meantime, new updates have been released, so if you start a new server today and install all updates, you'll end up with a different server than the one from last week; different can mean that for some reason it no longer works. That's why we encourage you to explicitly define the updates you want to install. To install security updates with an explicit version, use the yum update-to command, which updates a package to an explicit version instead of the latest (for example, yum -y update-to openssl-1.0.1k-1.84.amzn1).
Using a CloudFormation template to describe an EC2 instance with explicitly defined updates looks like this:
[...]
"Server": {
  "Type": "AWS::EC2::Instance",
  "Properties": {
    [...]
    "UserData": {"Fn::Base64": {"Fn::Join": ["", [
      "#!/bin/bash -ex\n",
      "yum -y update-to openssl-1.0.1k-1.84.amzn1 unzip-6.0-2.9.amzn1\n"
    ]]}}
  }
}
[...]
The same approach works for non-security-related package updates. Whenever a new security update is released, you should check whether you’re affected and modify the user data to keep new systems secure.
From time to time, you must install security updates on all your running servers. You could manually log in to all your servers using SSH and run yum -y --security update or yum update-to [...], but if you have many servers or the number of servers grows, this can be annoying. One way to automate this task is to use a small script that gets a list of your servers and executes yum in all of them. The following listing shows how this can be done in Bash. You can find the code in /chapter6/update.sh in the book’s code folder.
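The update script isn't reproduced here; the following sketch captures the idea (function and host names are assumptions, not the book's exact listing). It generates one SSH invocation per host, so you can inspect the commands before piping them to sh:

```shell
#!/bin/bash
# emit_update_commands prints one ssh invocation per host; pipe the
# output to sh to actually run the security updates on each server.
emit_update_commands() {
  local host
  for host in "$@"; do
    printf 'ssh -t ec2-user@%s "sudo yum -y --security update"\n' "$host"
  done
}

# In practice, the host list would come from the AWS CLI, for example:
#   aws ec2 describe-instances \
#     --filters "Name=instance-state-name,Values=running" \
#     --query "Reservations[].Instances[].PublicDnsName" --output text
emit_update_commands host1.example.com host2.example.com
```

Separating command generation from execution also gives you a dry-run mode for free: leave off the pipe to sh and you see exactly what would be executed.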
Now you can quickly apply updates to all of your running servers.
Some security updates require you to reboot the virtual server—for example, if you need to patch the kernel of your virtual servers running on Linux. You can automate the reboot of the servers or switch to an updated AMI and start new virtual servers instead. For example, a new AMI of Amazon Linux is released four times a year.
Securing your AWS account is critical. If someone gets access to your AWS account, they can steal your data, destroy everything (data, backups, servers), or steal your identity to do bad stuff. Figure 6.1 shows an AWS account. Each AWS account comes with a root user. In this book’s example, you’re using the root user when you use the Management Console; if you use the CLI, you’re using the mycli user that you created in section 4.2. In addition to the root user, an AWS account is a basket for all the resources you own: EC2 instances, CloudFormation stacks, IAM users, and so on.
To access your AWS account, an attacker must be able to authenticate with your account. There are three ways to do so: using the root user, using a normal user, or authenticating as an AWS resource like an EC2 instance. To authenticate as a (root) user, the attacker needs the password or the access key. To authenticate as an AWS resource like an EC2 server, the attacker needs to send API/CLI requests from that EC2 instance.
In this section, you’ll begin protecting your root user with multifactor authentication (MFA). Then you’ll stop using the root user, create a new user for daily operations, and learn to grant least permissions to a role.
We advise you to enable multifactor authentication (MFA) for your root user if you’re going to use AWS in production. After MFA is activated, you need a password and a temporary token to log in as the root user. Thus an attacker needs not only your password, but also your MFA device.
Follow these steps to enable MFA, as shown in figure 6.2:
1. Click your name in the navigation bar at the top of the Management Console.
2. Click Security Credentials.
3. A pop-up may appear the first time; choose Continue to Security Credentials.
4. Install an MFA app on your smartphone (such as Google Authenticator).
5. Expand the Multi-Factor Authentication (MFA) section.
6. Click Activate MFA.
7. Follow the instructions in the wizard. Use the MFA app on your smartphone to scan the QR code that is displayed.
If you’re using your smartphone as a virtual MFA device, it’s a good idea not to log in to the Management Console from your smartphone or to store the root user’s password on the phone. Keep the MFA token separate from your password.
The Identity and Access Management (IAM) service provides everything needed for authentication and authorization with the AWS API. Every request you make to the AWS API goes through IAM to check whether the request is allowed. IAM controls who (authentication) can do what (authorization) in your AWS account: who’s allowed to create EC2 instances? Is the user allowed to terminate a specific EC2 instance?
Authentication with IAM is done with users or roles, whereas authorization is done by policies. How do users and roles differ? Table 6.1 shows the differences. Roles authenticate an EC2 instance; a user should be used for everything else.
|  | Root user | IAM user | IAM role |
| --- | --- | --- | --- |
| Can have a password | Always | Yes | No |
| Can have an access key | Yes (not recommended) | Yes | No |
| Can belong to a group | No | Yes | No |
| Can be associated with an EC2 instance | No | No | Yes |
IAM users and IAM roles use policies for authorization. Let's look at policies first; we'll then continue with users and roles. Keep in mind that users and roles can't do anything until you allow certain actions with a policy.
A policy is defined in JSON and contains one or more statements. A statement can either allow or deny specific actions on specific resources. You can find an overview of all the actions available for EC2 resources at http://mng.bz/WQ3D. The wildcard character * can be used to create more generic statements.
The following policy has one statement that allows every action for the EC2 service for all resources:
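A minimal sketch of such a policy (the Sid value is an assumption; the structure follows the IAM policy grammar):

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "1",
    "Effect": "Allow",
    "Action": ["ec2:*"],
    "Resource": ["*"]
  }]
}
```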
If you have multiple statements that apply to the same action, Deny overrides Allow. The following policy allows all EC2 actions except terminating instances:
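A sketch of this policy (Sid values are assumptions): a general Allow for ec2:*, plus a Deny statement for ec2:TerminateInstances that overrides it:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "1",
    "Effect": "Allow",
    "Action": ["ec2:*"],
    "Resource": ["*"]
  }, {
    "Sid": "2",
    "Effect": "Deny",
    "Action": ["ec2:TerminateInstances"],
    "Resource": ["*"]
  }]
}
```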
The following policy denies all EC2 actions. The ec2:TerminateInstances statement isn’t crucial, because Deny overrides Allow. When you deny an action, you can’t allow that action with another statement:
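A sketch of this policy (Sid values are assumptions): the Deny on ec2:* wins, so the Allow for ec2:TerminateInstances has no effect:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "1",
    "Effect": "Deny",
    "Action": ["ec2:*"],
    "Resource": ["*"]
  }, {
    "Sid": "2",
    "Effect": "Allow",
    "Action": ["ec2:TerminateInstances"],
    "Resource": ["*"]
  }]
}
```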
So far, the Resource part has been ["*"] for every resource. Resources in AWS have an Amazon Resource Name (ARN); figure 6.3 shows the ARN of an EC2 instance.
To find out the account ID, you can use the CLI:
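One way to do it (the exact command from the book's listing isn't shown here; this is an assumption): query your user's ARN and read the account ID out of it, since the account ID is the fifth colon-separated field of an ARN.

```shell
# In practice you'd fetch the ARN live:
#   ARN=$(aws iam get-user --query "User.Arn" --output text)
# A sample ARN stands in for the live call so the extraction is visible.
ARN="arn:aws:iam::878533158213:user/mycli"
ACCOUNTID=$(echo "$ARN" | cut -d':' -f5)
echo "$ACCOUNTID"   # 878533158213
```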
If you know your account ID, you can use ARNs to allow access to specific resources of a service:
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "2",
    "Effect": "Allow",
    "Action": ["ec2:TerminateInstances"],
    "Resource": ["arn:aws:ec2:us-east-1:878533158213:instance/i-3dd4f812"]
  }]
}
There are two types of policies: managed policies, which exist as standalone resources that can be attached to multiple users, groups, or roles (AWS provides predefined ones, such as AdministratorAccess), and inline policies, which belong to exactly one user, group, or role.
With CloudFormation, it’s easy to maintain inline policies; that’s why we use inline policies most of the time in this book. One exception is the mycli user: this user has the AWS managed policy AdministratorAccess attached.
A user can authenticate with either a password or an access key. When you log in to the Management Console, you’re authenticating with your password. When you use the CLI from your computer, you use an access key to authenticate as the mycli user.
You're currently using the root user to log in to the Management Console. Because granting least permissions is always a good idea, you'll create a new user for the Management Console. To make things easier if you add more users in the future, you'll first create a group for all admin users. A group can't be used to authenticate, but it centralizes authorization. If you want to stop your admin users from terminating EC2 servers, you only need to change the policy for the group instead of changing it for every admin user. A user can be a member of none, one, or multiple groups.
It’s easy to create groups and users with the CLI. Replace $Password with a secure password:
$ aws iam create-group --group-name "admin"
$ aws iam attach-group-policy --group-name "admin" \
    --policy-arn "arn:aws:iam::aws:policy/AdministratorAccess"
$ aws iam create-user --user-name "myuser"
$ aws iam add-user-to-group --group-name "admin" --user-name "myuser"
$ aws iam create-login-profile --user-name "myuser" --password "$Password"
The user myuser is ready to be used. But you must use a different URL to access the Management Console if you aren’t using the root user: https://$accountId.signin.aws.amazon.com/console. Replace $accountId with the account ID that you extracted earlier with the aws iam get-user call.
We encourage you to enable MFA for all users as well. If possible, don’t use the same MFA device for your root user and everyday users. You can buy hardware MFA devices for $13 from AWS partners like Gemalto. To enable MFA for your users, follow these steps:
1. Open the IAM service in the Management Console.
2. Choose Users at left.
3. Select the myuser user.
4. Click the Manage MFA Device button in the Sign-In Credentials section at the bottom of the page. The wizard is the same as for the root user.
You should have MFA activated for all users who have a password—users who can be used with the Management Console.
Stop using the root user from now on. Always use myuser and the new link to the Management Console.
Never copy a user's access key to an EC2 instance; use IAM roles instead. Don't store security credentials in your source code, and never check them into your Git or SVN repository.
An IAM role can be used to authenticate AWS resources like virtual servers. You attach a role to an EC2 instance through an instance profile. Each AWS API request from the instance then authenticates with that role, and IAM checks all policies attached to the role to determine whether the request is allowed. By default, EC2 instances have no role and therefore aren't allowed to make any calls to the AWS API.
Do you remember the temporary EC2 instances from chapter 4? It turned out that temporary servers often weren't terminated, because people forgot to do so, and a lot of money was wasted as a result. You'll now create an EC2 instance that stops itself after a while. The at command schedules the stop after a 5-minute delay:
echo "aws ec2 stop-instances --instance-ids i-3dd4f812" | at now + 5 minutes
The EC2 instance needs permission to stop itself. You can use an inline policy to allow this. The following listing shows how you define a role as a resource in CloudFormation:
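The role listing isn't reproduced above; a minimal sketch in CloudFormation might look like this (the resource and policy names are assumptions). The trust policy lets EC2 assume the role, and the inline policy allows only ec2:StopInstances:

```json
"Role": {
  "Type": "AWS::IAM::Role",
  "Properties": {
    "AssumeRolePolicyDocument": {
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": ["ec2.amazonaws.com"]},
        "Action": ["sts:AssumeRole"]
      }]
    },
    "Policies": [{
      "PolicyName": "ec2",
      "PolicyDocument": {
        "Version": "2012-10-17",
        "Statement": [{
          "Effect": "Allow",
          "Action": ["ec2:StopInstances"],
          "Resource": ["*"]
        }]
      }
    }]
  }
}
```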
To attach an inline role to an instance, you must first create an instance profile:
"InstanceProfile": {
  "Type": "AWS::IAM::InstanceProfile",
  "Properties": {
    "Path": "/",
    "Roles": [{"Ref": "Role"}]
  }
}
Now you can combine the role with the EC2 instance:
"Server": {
  "Type": "AWS::EC2::Instance",
  "Properties": {
    "IamInstanceProfile": {"Ref": "InstanceProfile"},
    [...]
    "UserData": {"Fn::Base64": {"Fn::Join": ["", [
      "#!/bin/bash -ex\n",
      "INSTANCEID=`curl -s http://169.254.169.254/latest/meta-data/instance-id`\n",
      "echo \"aws --region us-east-1 ec2 stop-instances ",
      "--instance-ids $INSTANCEID\" | at now + 5 minutes\n"
    ]]}}
  }
}
Create the CloudFormation stack with the template located at https://s3.amazonaws.com/awsinaction/chapter6/server.json. You can specify the lifetime of the server via a parameter. Wait until the lifetime is reached and see if your instance is stopped. The lifetime begins when the server is fully started and booted.
Don’t forget to delete your stack after you finish this section to clean up all used resources. Otherwise you’ll likely be charged for the resources you use.
You only want traffic to enter or leave your EC2 instance that has to do so. With a firewall, you can control ingoing (also called inbound or ingress) and outgoing (also called outbound or egress) traffic. If you run a web server, the only ports you need to open to the outside world are port 80 for HTTP traffic and 443 for HTTPS traffic. All other ports should be closed down. Only open ports that must be open, just as you grant least permissions with IAM. If you have a strict firewall, you shut down a lot of possible security holes. You can also prevent the accidental sending of mail to customers from a test system by not opening outgoing SMTP connections for test systems.
Before network traffic can enter or leave your EC2 instance, it goes through a firewall provided by AWS. The firewall inspects the network traffic and uses rules to decide whether the traffic is allowed or denied.
The abbreviation IP is used for Internet Protocol, whereas an IP address is something like 84.186.116.47.
Figure 6.4 shows how an SSH request from a source IP address 10.0.0.10 is inspected by the firewall and received by the destination IP address 10.10.0.20. In this case, the firewall allows the request because there’s a rule that allows TCP traffic on port 22 between the source and the destination.
Inbound security-group rules filter based on the source of the network traffic. The source is either an IP address or a security group. Thus you can allow inbound traffic only from specific source IP address ranges.
Outbound security-group rules filter based on the destination of the network traffic. The destination is either an IP address or a security group. You can allow outbound traffic to only specific destination IP address ranges.
AWS is responsible for the firewall, but you’re responsible for the rules. By default, all inbound traffic is denied and all outbound traffic is allowed. You can then begin to allow inbound traffic. If you add rules for outgoing traffic, the default will switch from allow all to deny all, and only the exceptions you add will be allowed.
A security group can be associated with AWS resources like EC2 instances. It’s common for EC2 instances to have more than one security group associated with them and for the same security group to be associated with many EC2 instances.
A security group follows a set of rules. A rule can allow network traffic based on the protocol (such as TCP, UDP, or ICMP), the port or port range, and the source or destination of the traffic (an IP address range or a security group).
You can define rules that allow all traffic to enter and leave your server; AWS won’t prevent you from doing so. But it’s good practice to define your rules so they’re as restrictive as possible.
A security group resource in CloudFormation is of type AWS::EC2::SecurityGroup. The following listing is in /chapter6/firewall1.json in the book’s code folder: the template describes an empty security group associated with a single EC2 instance.
To explore security groups, you can try the CloudFormation template located at https://s3.amazonaws.com/awsinaction/chapter6/firewall1.json. Create a stack based on that template, and then copy the PublicName from the stack output.
If you want to ping an EC2 instance from your computer, you must allow inbound Internet Control Message Protocol (ICMP) traffic. By default, all inbound traffic is blocked. Try ping $PublicName to make sure ping isn’t working:
$ ping ec2-52-5-109-147.compute-1.amazonaws.com
PING ec2-52-5-109-147.compute-1.amazonaws.com (52.5.109.147): 56 data bytes
Request timeout for icmp_seq 0
Request timeout for icmp_seq 1
[...]
You need to add a rule to the security group that allows inbound traffic, where the protocol equals ICMP. The following listing can be found at /chapter6/firewall2.json in the book’s code folder.
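A sketch of such a rule (the resource name and description are assumptions), defined as an ingress rule on the security group; FromPort and ToPort of -1 mean all ICMP types:

```json
"SecurityGroup": {
  "Type": "AWS::EC2::SecurityGroup",
  "Properties": {
    "GroupDescription": "Allow inbound ICMP",
    "SecurityGroupIngress": [{
      "IpProtocol": "icmp",
      "FromPort": "-1",
      "ToPort": "-1",
      "CidrIp": "0.0.0.0/0"
    }]
  }
}
```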
Update the CloudFormation stack with the template located at https://s3.amazonaws.com/awsinaction/chapter6/firewall2.json, and retry the ping command. It should now look like this:
$ ping ec2-52-5-109-147.compute-1.amazonaws.com
PING ec2-52-5-109-147.compute-1.amazonaws.com (52.5.109.147): 56 data bytes
64 bytes from 52.5.109.147: icmp_seq=0 ttl=49 time=112.222 ms
64 bytes from 52.5.109.147: icmp_seq=1 ttl=49 time=121.893 ms
[...]
round-trip min/avg/max/stddev = 112.222/117.058/121.893/4.835 ms
Everyone’s inbound ICMP traffic (every source IP address) is now allowed to reach the EC2 instance.
Once you can ping your EC2 instance, you want to log in to your server via SSH. To do so, you must create a rule to allow inbound TCP requests on port 22.
Update the CloudFormation stack with the template located at https://s3.amazonaws.com/awsinaction/chapter6/firewall3.json. You can now log in to your server using SSH. Keep in mind that you still need the correct private key. The firewall only controls the network layer; it doesn’t replace key-based or password-based authentication.
So far, you’re allowing inbound traffic on port 22 (SSH) from every source IP address. You can restrict access to only your IP address.
Hard-coding the public IP address into the template isn’t a good solution because this changes from time to time. But you already know the solution: parameters. You need to add a parameter that holds your current public IP address, and you need to modify the AllowInboundSSH rule. You can find the following listing in /chapter6/firewall4.json in the book’s code folder.
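A sketch of the parameter and the modified rule (resource names are assumptions); note how Fn::Join appends /32 to the address so that exactly one IP is allowed:

```json
"Parameters": {
  "IPForSSH": {
    "Description": "Your public IP address, allowed to connect via SSH",
    "Type": "String"
  }
},
"Resources": {
  "AllowInboundSSH": {
    "Type": "AWS::EC2::SecurityGroupIngress",
    "Properties": {
      "GroupId": {"Ref": "SecurityGroup"},
      "IpProtocol": "tcp",
      "FromPort": "22",
      "ToPort": "22",
      "CidrIp": {"Fn::Join": ["", [{"Ref": "IPForSSH"}, "/32"]]}
    }
  }
}
```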
On my local network, I’m using private IP addresses that start with 192.168.0.*. My laptop uses 192.168.0.10, and my iPad uses 192.168.0.20. But if I access the internet, I have the same public IP (such as 79.241.98.155) for my laptop and iPad. That’s because only my internet gateway (the box that connects to the internet) has a public IP address, and all requests are redirected by the gateway (if you want to dive deep into this, search for network address translation). Your local network doesn’t know about this public IP address. My laptop and iPad only know that the internet gateway is reachable under 192.168.0.1 on the private network.
To find out your public IP address, visit http://api.ipify.org. For most of us, our public IP address changes from time to time, usually when we reconnect to the internet (which happens every 24 hours in my case).
Update the CloudFormation stack with the template located at https://s3.amazonaws.com/awsinaction/chapter6/firewall4.json. Type in your public IP address $IPForSSH when asked for parameters. Now only your IP address can open SSH connections to your EC2 instance.
You may wonder what /32 means in listing 6.5. To understand what’s going on, you need to switch your brain into binary mode. An IP address is 4 bytes or 32 bits long. The /32 defines how many bits (32, in this case) should be used to form a range of addresses. If you want to define the exact IP address that’s allowed, you must use all 32 bits.
But sometimes it makes sense to define a range of allowed IP addresses. For example, you can use 10.0.0.0/8 to create a range between 10.0.0.0 and 10.255.255.255, 10.0.0.0/16 to create a range between 10.0.0.0 and 10.0.255.255, or 10.0.0.0/24 to create a range between 10.0.0.0 and 10.0.0.255. You aren’t required to use the binary boundaries (8, 16, 24, 32), but they’re easier for most people to understand. You already used 0.0.0.0/0 to create a range that contains every possible IP address.
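The ranges above can be verified with a small helper (not from the book) that prints the first and last address of a CIDR block:

```shell
#!/bin/bash
# cidr_range prints "first-last" for a CIDR block by converting the
# address to a 32-bit integer, masking off the prefix bits, and
# filling the host bits with zeros (first) or ones (last).
cidr_range() {
  local ip=${1%/*} prefix=${1#*/} a b c d
  IFS=. read -r a b c d <<< "$ip"
  local addr=$(( (a << 24) | (b << 16) | (c << 8) | d ))
  local mask=$(( (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF ))
  local first=$(( addr & mask ))
  local last=$(( first | (0xFFFFFFFF >> prefix) ))
  printf '%d.%d.%d.%d-%d.%d.%d.%d\n' \
    $(( (first >> 24) & 255 )) $(( (first >> 16) & 255 )) \
    $(( (first >> 8) & 255 ))  $(( first & 255 )) \
    $(( (last >> 24) & 255 ))  $(( (last >> 16) & 255 )) \
    $(( (last >> 8) & 255 ))   $(( last & 255 ))
}

cidr_range 10.0.0.0/8    # 10.0.0.0-10.255.255.255
cidr_range 10.0.0.0/24   # 10.0.0.0-10.0.0.255
cidr_range 10.0.0.10/32  # 10.0.0.10-10.0.0.10
```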
Now you can control network traffic that comes from outside AWS or goes outside AWS by filtering based on protocol, port, and source IP address.
If you want to control traffic from one AWS resource (like an EC2 instance) to another, security groups are powerful. You can control network traffic based on whether the source or destination belongs to a specific security group. For example, you can define that a MySQL database can only be accessed if the traffic comes from your web servers, or that only your web cache servers are allowed to access the web servers. Because of the elastic nature of the cloud, you’ll likely deal with a dynamic number of servers, so rules based on source IP addresses are difficult to maintain. This becomes easy if your rules are based on source security groups.
To explore the power of rules based on a source security group, let's look at the concept of a bastion host for SSH access (some people call it a jump box). The trick is that only one server, the bastion host, can be accessed via SSH from the internet (and that access should be restricted to a specific source IP address). All other servers can only be reached via SSH from the bastion host. This approach has two advantages: only a single server is reachable from the internet, which keeps the attack surface small, and SSH access to your whole system is concentrated in one place that you can monitor and harden.
To implement the concept of a bastion host, you must follow these two rules: allow inbound SSH to the bastion host only from your IP address, and allow inbound SSH to all other servers only if the source of the traffic is the bastion host's security group.
Figure 6.5 shows a bastion host with two servers that are only reachable via SSH from the bastion host.
The following listing shows the SSH rule that allows traffic from a specific source security group.
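The rule might look like this sketch (resource names are assumptions); the key point is SourceSecurityGroupId in place of CidrIp:

```json
"AllowSSHFromBastionHost": {
  "Type": "AWS::EC2::SecurityGroupIngress",
  "Properties": {
    "GroupId": {"Ref": "SecurityGroup"},
    "IpProtocol": "tcp",
    "FromPort": "22",
    "ToPort": "22",
    "SourceSecurityGroupId": {"Ref": "SecurityGroupBastionHost"}
  }
}
```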
Update the CloudFormation stack with the template located at https://s3.amazonaws.com/awsinaction/chapter6/firewall5.json. When the update is complete, the stack shows three outputs:
Now connect to BastionHostPublicName via SSH using ssh -i $PathToKey/mykey.pem -A ec2-user@$BastionHostPublicName. Replace $PathToKey with the path to your SSH key and $BastionHostPublicName with the public name of the bastion host. The -A option is important to enable AgentForwarding; agent forwarding lets you authenticate with the same key you used to log in to the bastion host for further SSH logins initiated from the bastion host.
Execute the following command to add your key to the SSH agent. Replace $PathToKey with the path to the SSH key:
ssh-add $PathToKey/mykey.pem
To make agent forwarding work with PuTTY, you need to make sure your key is loaded to PuTTY Pageant by double-clicking the private key file. You must also enable Connection > SSH > Auth > Allow Agent Forwarding, as shown in figure 6.6.
From the bastion host, you can then continue to log in to $Server1PublicName or $Server2PublicName with ssh ec2-user@$Server1PublicName; thanks to agent forwarding, these logins authenticate with your local key.
The bastion host can be used to add a layer of security to your system. If one of your servers is compromised, an attacker can’t jump to other servers in your system. This reduces the potential damage an attacker can inflict. It’s important that the bastion host does nothing but SSH, to reduce the chance of it becoming a security risk. We use the bastion-host pattern frequently to protect our clients.
Don’t forget to delete your stack after you finish this section to clean up all used resources. Otherwise you’ll likely be charged for the resources you use.
By creating a Virtual Private Cloud (VPC), you get your own private network on AWS. Private means you can use the address ranges 10.0.0.0/8, 172.16.0.0/12, or 192.168.0.0/16 to design a network that isn’t necessarily connected to the public internet. You can create subnets, route tables, access control lists (ACLs), and gateways to the internet or a VPN endpoint.
A subnet allows you to separate concerns. Create a new subnet for your databases, web servers, caching servers, or application servers, or whenever you can separate two systems. Another rule of thumb is that you should have at least two subnets: public and private. A public subnet has a route to the internet; a private subnet doesn’t. Your web servers should be in the public subnet, and your database resides in the private subnet.
For the purpose of understanding how a VPC works, you’ll create a VPC to host an enterprise web application. You’ll re-implement the bastion host concept from the previous section by creating a public subnet that contains only the bastion host server. You’ll also create a private subnet for your web servers and one public subnet for your web caches. The web caches absorb most of the traffic by responding with the latest version of the page they have in their cache, and they redirect traffic to the private web servers. You can’t access a web server directly over the internet—only through the web caches.
The VPC uses the address space 10.0.0.0/16. To separate concerns, you’ll create two public subnets and one private subnet in the VPC:
10.0.0.0/16 represents all IP addresses between 10.0.0.0 and 10.0.255.255. It’s using CIDR notation (explained earlier in the chapter).
Network ACLs restrict traffic that goes from one subnet to another like a firewall. The SSH bastion host from section 6.4 can be implemented with these ACLs:
To allow traffic to the Varnish web cache and the HTTP servers, additional ACLs are required:
Figure 6.7 shows the architecture of the VPC.
You’ll use CloudFormation to describe the VPC with its subnets. The template is split into smaller parts to make it easier to read in the book. As usual, you’ll find the code in the book’s code repository on GitHub: https://github.com/AWSinAction/code. The template is located at /chapter6/vpc.json.
The first resources in the template are the VPC and the internet gateway (IGW). The IGW will translate the public IP addresses of your virtual servers to their private IP addresses using network address translation (NAT). All public IP addresses used in the VPC are controlled by this IGW:
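A sketch of these resources (names are assumptions); note that the IGW must be explicitly attached to the VPC:

```json
"VPC": {
  "Type": "AWS::EC2::VPC",
  "Properties": {
    "CidrBlock": "10.0.0.0/16",
    "EnableDnsHostnames": "true"
  }
},
"InternetGateway": {
  "Type": "AWS::EC2::InternetGateway",
  "Properties": {}
},
"VPCGatewayAttachment": {
  "Type": "AWS::EC2::VPCGatewayAttachment",
  "Properties": {
    "VpcId": {"Ref": "VPC"},
    "InternetGatewayId": {"Ref": "InternetGateway"}
  }
}
```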
Next you’ll define the subnet for the bastion host.
The bastion host subnet will only run a single machine to secure SSH access:
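A sketch of the subnet and its route to the IGW (resource names and the CIDR block are assumptions); the 0.0.0.0/0 route to the internet gateway is what makes this a public subnet:

```json
"SubnetPublicBastionHost": {
  "Type": "AWS::EC2::Subnet",
  "Properties": {
    "CidrBlock": "10.0.1.0/24",
    "VpcId": {"Ref": "VPC"}
  }
},
"RouteTablePublicBastionHost": {
  "Type": "AWS::EC2::RouteTable",
  "Properties": {"VpcId": {"Ref": "VPC"}}
},
"RoutePublicBastionHostToInternet": {
  "Type": "AWS::EC2::Route",
  "Properties": {
    "RouteTableId": {"Ref": "RouteTablePublicBastionHost"},
    "DestinationCidrBlock": "0.0.0.0/0",
    "GatewayId": {"Ref": "InternetGateway"}
  }
}
```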
The definition of the ACL follows:
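A sketch of an ACL with one inbound and one outbound entry (resource names and rule numbers are assumptions); protocol 6 is TCP. A SubnetNetworkAclAssociation resource (not shown) links the ACL to the subnet:

```json
"NetworkAclBastionHost": {
  "Type": "AWS::EC2::NetworkAcl",
  "Properties": {"VpcId": {"Ref": "VPC"}}
},
"NetworkAclEntryInboundSSH": {
  "Type": "AWS::EC2::NetworkAclEntry",
  "Properties": {
    "NetworkAclId": {"Ref": "NetworkAclBastionHost"},
    "RuleNumber": "100",
    "Protocol": "6",
    "RuleAction": "allow",
    "Egress": "false",
    "CidrBlock": "0.0.0.0/0",
    "PortRange": {"From": "22", "To": "22"}
  }
},
"NetworkAclEntryOutboundEphemeral": {
  "Type": "AWS::EC2::NetworkAclEntry",
  "Properties": {
    "NetworkAclId": {"Ref": "NetworkAclBastionHost"},
    "RuleNumber": "100",
    "Protocol": "6",
    "RuleAction": "allow",
    "Egress": "true",
    "CidrBlock": "0.0.0.0/0",
    "PortRange": {"From": "1024", "To": "65535"}
  }
}
```

The outbound ephemeral-port entry is needed because, unlike a security group, an ACL doesn't automatically allow the responses that belong to an allowed inbound connection.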
There’s an important difference between security groups and ACLs: security groups are stateful, but ACLs aren’t. If you allow an inbound port on a security group, the outbound response that belongs to a request on the inbound port is allowed as well. A security group rule will work as you expect it to. If you open inbound port 22 on a security group, you can connect via SSH.
That’s not true for ACLs. If you open inbound port 22 on an ACL for your subnet, you can’t connect via SSH. In addition, you need to allow outbound ephemeral ports because sshd (SSH daemon) accepts connections on port 22 but uses an ephemeral port for communication with the client. Ephemeral ports are selected from the range starting at 1024 and ending at 65535.
If you want to make an SSH connection from within your subnet, you also have to open outbound port 22 and inbound ephemeral ports. If you aren't familiar with all this, go with security groups and allow everything at the ACL level.
The subnet for the Varnish web cache is similar to the bastion host subnet because it’s also a public subnet; that’s why we’ll skip it. You’ll continue with the private subnet for the Apache web server:
The only difference between a public and a private subnet is that a private subnet doesn’t have a route to the IGW. Traffic between subnets of a VPC is always routed by default. You can’t remove the routes between the subnets. If you want to prevent traffic between subnets in a VPC, you need to use ACLs attached to the subnets.
Your subnets are ready and you can continue with the EC2 instances. First you describe the bastion host:
The Varnish server looks similar. But again, the private Apache web server differs in configuration:
You’re now in serious trouble: installing Apache won’t work because your private subnet has no route to the internet.
Public subnets have a route to the internet gateway. You can use a similar mechanism to provide internet access for private subnets without having a direct route to the internet: use a NAT server in a public subnet, and create a route from your private subnet to the NAT server. A NAT server is a virtual server that handles network address translation. Internet traffic from your private subnet will access the internet from the public IP address of the NAT server.
Traffic from your EC2 instances to other AWS services that are accessed via the API (such as the object store S3 or the NoSQL database DynamoDB) also goes through the NAT instance. This can quickly become a major bottleneck. If your EC2 instances need to communicate heavily with the internet, a NAT instance is most likely not a good idea; consider launching those instances in a public subnet instead.
To keep concerns separated, you’ll create a new subnet for the NAT server. AWS provides an image (AMI) for a virtual server that has the configuration done for you:
Now you’re ready to create the CloudFormation stack with the template located at https://s3.amazonaws.com/awsinaction/chapter6/vpc.json. Once you’ve done so, copy the VarnishServerPublicName output and open it in your browser. You’ll see an Apache test page that was cached by Varnish.
Don’t forget to delete your stack after finishing this section, to clean up all used resources. Otherwise you’ll likely be charged for the resources you use.