Chapter 5
Networking

THE AWS CERTIFIED SYSOPS ADMINISTRATOR - ASSOCIATE EXAM TOPICS COVERED IN THIS CHAPTER MAY INCLUDE, BUT ARE NOT LIMITED TO, THE FOLLOWING:

  • Domain 1.0: Monitoring and Metrics
  • 1.1 Demonstrate ability to monitor availability and performance
  • Content may include the following:
    • Using Amazon CloudWatch to monitor Amazon Route 53, Amazon CloudFront, and Elastic Load Balancers
    • Monitoring traffic in your Amazon VPC with Amazon VPC Flow Logs
    • Monitoring & Alarming using Amazon CloudWatch
  • Domain 2.0: High Availability
  • 2.2 Ensure level of fault tolerance based on business needs
    • Deploying resources in multiple subnets within multiple Availability Zones
    • Using Elastic Load Balancing to deliver traffic to distributed resources
    • Using Amazon Route 53 to deliver fault tolerant infrastructures
  • Domain 3.0: Analysis
  • 3.1 Optimize the environment to ensure maximum performance
  • Content may include the following:
    • Using Amazon CloudFront to deliver your content at improved performance for your viewers, while minimizing the operational burden and cost of scaling your infrastructure
  • Domain 6.0: Security
  • 6.1 Implement and manage security policies
  • Content may include the following:
    • Controlling access to Amazon VPC, Amazon CloudFront, Amazon Route 53, and AWS Direct Connect
  • Domain 7.0: Networking
  • 7.1 Demonstrate ability to implement networking features of AWS
  • Content may include the following:
    • Public and Private Subnets
    • Route tables
    • Elastic Load Balancing
    • Elastic Network Interfaces (ENI) and Elastic IP (EIP) addresses
  • 7.2 Demonstrate ability to implement connectivity features of AWS
  • Content may include the following:
    • Security groups and network ACL to control traffic into and out of your Amazon VPC
    • Controlling access to the Internet with Internet gateways and Network Address Translation (NAT) gateways
    • Peering Amazon VPCs
    • Using AWS Direct Connect and VPNs to access your Amazon VPC


Introduction to Networking on AWS

This chapter introduces you to a number of network services that AWS provides. Some of the services described in this chapter, such as Amazon Virtual Private Cloud (Amazon VPC), are fundamental to the operation of services on AWS. Others, like Amazon Route 53, offer services that, while optional, provide tight integration with AWS products and services.

The primary goal of this book is to prepare you for the AWS Certified SysOps Administrator - Associate exam; however, we want to do more for you. The authors of this book want to provide you with as much information as possible to assist you in your everyday journey as a Systems Operator.

The AWS services covered in this chapter include:

Amazon VPC With Amazon VPC, you provision a logically-isolated section of the AWS Cloud where you launch AWS resources in a virtual network that you have defined. You have complete control over your virtual networking environment.

AWS Direct Connect AWS Direct Connect allows you to establish a dedicated network connection from your premises to AWS.

Elastic Load Balancing Elastic Load Balancing automatically distributes incoming application traffic across multiple Amazon Elastic Compute Cloud (Amazon EC2) instances in an AWS Region. It helps you achieve fault tolerance in your applications by seamlessly providing the load balancing capacity needed to route application traffic.

Virtual Private Network (VPN) connections With VPN connections, you can connect Amazon VPC to remote networks. You can take advantage of AWS infrastructure to build a highly available, highly scalable solution, or you can build your own solution.

Amazon Route 53 Amazon Route 53 is a highly available and scalable cloud Domain Name System (DNS) web service. You can use Amazon Route 53 for domain management and for DNS service to connect to both AWS and non-AWS resources.

Amazon CloudFront Amazon CloudFront is a global Content Delivery Network (CDN) service that accelerates delivery of your websites, Application Programming Interfaces (APIs), video content, or other web assets.

Amazon Virtual Private Cloud (Amazon VPC)

Amazon Virtual Private Cloud (Amazon VPC) enables you to launch Amazon Web Services (AWS) resources into a virtual network that you’ve defined. This virtual network closely resembles a traditional network that you’d operate in your own datacenter, with the benefits of using the scalable infrastructure of AWS.

As you get started with Amazon VPC, you should understand the key concepts of this virtual network and how it is similar to or different from your own networks.

Amazon VPC is the networking layer for Amazon EC2. Amazon EC2 was covered in Chapter 4, “Compute.”

Amazon VPC Implementation

There are some decisions that you need to make when implementing an Amazon VPC. These decisions, because they have both design and operational implications, should be thought through thoroughly and made ahead of time. This includes decisions about the following:

  • Region
  • IP address type
  • IP address range
  • Size of Amazon VPC
  • Number of subnets
  • IP address range of subnets
  • Size of subnets

Amazon VPC is a virtual network dedicated to your AWS account. It is logically isolated from other virtual networks in the AWS Cloud. You can launch your AWS resources, such as Amazon EC2 instances, into your VPC. You can configure your VPC; you can select its IP address range, create subnets, and configure route tables, network gateways, and security settings.

A subnet is a range of IP addresses in your Amazon VPC. You can launch AWS resources into a subnet that you select. Use a public subnet for resources that must be reachable from the Internet, and a private subnet for resources that won’t be. Subnets cannot span Availability Zones, so you will need at least one subnet for each Availability Zone in which you plan to establish services. Another consideration is whether each subnet should be public or private.

Amazon VPCs are region-based services. VPCs cannot span regions. If you have (or want to have) services in two regions, you must have a minimum of two VPCs (one VPC per region). Within a region, you can have up to five VPCs per account; this is a soft limit that can be raised by request.

Regarding IP address type, AWS supports both IPv4 and IPv6 addressing. IPv4 addressing is the default, and it is required on all Amazon VPCs. IPv6 is optional. With IPv4, the addresses come from the private address space (RFC 1918), while IPv6 addresses are taken from AWS-assigned IPv6 address space. It is important to remember that with IPv6, there is no concept of “private” address space, so all IPv6 addresses are reachable from the Internet. It is recommended that you choose an IPv4 address space that is not currently in use on your on-premises network, and if you’re deploying more than one VPC, those VPCs should use different, non-overlapping address spaces.

An Amazon VPC can range in size from a /16 to a /28 netmask (65,536 down to 16 IP addresses). With IPv6, AWS assigns a fixed /56 CIDR block. It is recommended that you choose the largest block (a /16 for IPv4) to maximize room for growth. Now that we have talked about VPCs, you can use the VPC wizard in the AWS Management Console or the AWS CLI to create your own VPC (see Figure 5.1).


FIGURE 5.1 AWS CLI used to create an Amazon VPC



Figure 5.2 shows an Amazon VPC with a /22 CIDR block (1,024 total IP addresses) divided equally into four /24 subnets, each with 251 usable addresses after AWS reserves five addresses per subnet.


FIGURE 5.2 Amazon VPC /22 CIDR block and subnets
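The subnet arithmetic shown in Figure 5.2 can be verified with Python’s standard `ipaddress` module. This sketch assumes a hypothetical 10.0.0.0/22 block; the five-address reservation per subnet is the documented AWS behavior:

```python
import ipaddress

# A hypothetical /22 VPC block, as in Figure 5.2.
vpc = ipaddress.ip_network("10.0.0.0/22")
print(vpc.num_addresses)   # 1024 total IPv4 addresses

# RFC 1918 check: the block should come from private address space.
print(vpc.is_private)      # True

# Divide the VPC into four /24 subnets.
subnets = list(vpc.subnets(new_prefix=24))
print(len(subnets))        # 4

# AWS reserves 5 addresses per subnet (network address, VPC router,
# DNS, future use, and broadcast), leaving 251 usable in each /24.
usable = subnets[0].num_addresses - 5
print(usable)              # 251
```

The same arithmetic applies to any block size you choose, which is why starting with the largest allowed block (/16) leaves the most room for carving out subnets later.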

Any AWS account created after December 4, 2013, has a default Amazon VPC in each AWS Region. If you do not specify a VPC to launch, your services will be launched into the default VPC.

A default Amazon VPC is set up with the following:

  • A size /16 IPv4 Classless Inter-Domain Routing (CIDR) block (172.31.0.0/16)
  • A default subnet in each Availability Zone (using a /20 subnet mask)
  • An Internet gateway
  • A main route table with a rule that sends all IPv4 traffic destined for the Internet to the Internet gateway
  • A default security group that allows all outbound traffic and inbound traffic only from instances assigned to the same security group
  • A default network ACL that allows all traffic
  • A default DHCP option set

You can use a default VPC as you would use any other VPC: You can add subnets, modify the main route table, add route tables, associate additional security groups, update the rules of the default security group, and add VPN connections. You can use a default subnet as you would use any other subnet: You can add custom route tables and set network ACLs. You can also specify a default subnet when you launch an Amazon EC2 instance.

While the default subnet launches with IPv4 addresses, you can optionally associate an IPv6 CIDR block with your default VPC.


Route Tables

When a VPC is created, a route table associated with that VPC is also created. This is called the main route table. The main route table for VPC 10.155.0.0/16 would look like Table 5.1.

TABLE 5.1 Default Route Table for a VPC

Destination Target Status Propagated
10.155.0.0/16 local Active No

What this table is telling us is that any IP packet with a destination address in the 10.155.0.0/16 network (thus any packet with an address between 10.155.0.0 and 10.155.255.255) would be delivered within this VPC. What it also tells us is that any IP packet with any other IP address would be dropped because there are no instructions on how to route that packet.

Route tables can be modified to accommodate other addresses. Table 5.2 demonstrates this concept.

TABLE 5.2 Route Table for VPC with Route to the Internet

Destination Target Status Propagated
10.155.0.0/16 local Active No
0.0.0.0/0 igw-5af67a3e Active No

The route table now has an additional route. All network traffic bound for the 10.155.0.0/16 network would be delivered within this VPC, and all other packets would go to the Internet gateway (igw-5af67a3e). We talk about Internet gateways later in this section.
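Route selection follows longest-prefix matching: the most specific route whose destination contains the packet’s address is used. A minimal Python sketch of that logic, using the entries from Table 5.2:

```python
import ipaddress

# Routes mirror Table 5.2: destination CIDR -> target.
routes = {
    "10.155.0.0/16": "local",
    "0.0.0.0/0": "igw-5af67a3e",
}

def lookup(dest_ip):
    """Return the target of the most specific matching route."""
    ip = ipaddress.ip_address(dest_ip)
    matches = [
        ipaddress.ip_network(cidr)
        for cidr in routes
        if ip in ipaddress.ip_network(cidr)
    ]
    if not matches:
        return None  # no matching route: the packet is dropped
    # Longest prefix wins: /16 beats /0 for in-VPC addresses.
    best = max(matches, key=lambda net: net.prefixlen)
    return routes[str(best)]

print(lookup("10.155.4.20"))   # local (delivered within the VPC)
print(lookup("203.0.113.9"))   # igw-5af67a3e (sent to the Internet gateway)
```

With only the first route present (Table 5.1), the second lookup would return `None`, matching the chapter’s point that packets with no route are dropped.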

Network Access Control Lists (ACLs)

A VPC also has a default network Access Control List (ACL). The default network ACL allows all traffic. You can modify a network ACL by adding rules that allow or deny inbound or outbound traffic based on port, protocol, and IP address. Network ACLs operate much like firewalls in a datacenter: you create numbered rules that are evaluated in order, and the first rule that matches the traffic is applied. Network ACLs were introduced in Chapter 3, “Security and AWS Identity and Access Management (IAM).” Figure 5.3 shows a default network Access Control List.
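The ordered, first-match evaluation can be sketched in Python. The rule numbers, ports, and CIDR ranges here are hypothetical, chosen only to illustrate the evaluation order; the final fall-through mirrors the implicit “*” deny rule:

```python
import ipaddress

# Hypothetical inbound network ACL rules:
# (rule number, protocol, port, source CIDR, action)
acl_rules = [
    (100, "tcp", 443, "0.0.0.0/0", "ALLOW"),
    (200, "tcp", 22, "203.0.113.0/24", "ALLOW"),
    (300, "tcp", 22, "0.0.0.0/0", "DENY"),
]

def evaluate(protocol, port, source_ip):
    """Evaluate rules in ascending rule-number order; first match wins.
    If nothing matches, the implicit '*' rule denies the traffic."""
    src = ipaddress.ip_address(source_ip)
    for _number, proto, rule_port, cidr, action in sorted(acl_rules):
        if proto == protocol and rule_port == port and src in ipaddress.ip_network(cidr):
            return action
    return "DENY"

print(evaluate("tcp", 22, "203.0.113.10"))  # ALLOW (rule 200 matches before rule 300)
print(evaluate("tcp", 22, "198.51.100.7"))  # DENY  (rule 300)
print(evaluate("udp", 53, "198.51.100.7"))  # DENY  (implicit '*' rule)
```

Because evaluation stops at the first match, the relative numbering of an ALLOW and a DENY rule for overlapping traffic determines the outcome, which is why rule numbers are typically spaced out (100, 200, 300) to leave room for later insertions.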


FIGURE 5.3 Default network Access Control List

Security Groups

A VPC also has a default security group. The default security group allows all outbound traffic and allows instances assigned to it to communicate with one another; it too can be modified as needed. See the VPC diagram shown in Figure 5.4. Security groups were introduced in Chapter 3 and discussed in depth in Chapter 4.


FIGURE 5.4 VPC diagram

You control traffic within a VPC, as well as traffic into and out of a VPC, through route tables, network ACLs, and security groups.

One of the other things that is created with a VPC is a Dynamic Host Configuration Protocol (DHCP) option set. The DHCP option set controls various configuration parameters that are passed to compute instances created in the VPC. Some of the parameters are domain name, domain name server, NTP servers, NetBIOS name type, and NetBIOS node type. AWS provides a DNS server by default, but other DNS servers can be used instead. Figure 5.5 shows how to create a DHCP option set.


FIGURE 5.5 DHCP option set

When you create subnets within a VPC, those subnets are associated with the VPC’s main route table and default network ACL. If you want to change how packets are routed or filtered within a subnet, associate a custom route table and network ACL with that subnet as required.

Our recommended practice is to create two subnets per Availability Zone: one public subnet and one private subnet. The public subnet’s route table includes a route to the Internet; the private subnet’s route table does not. An instance in a public subnet has either a public IP or an Elastic IP address assigned to it, while an instance in a private subnet has only a private IP address. For an instance in a private subnet to send a packet to the Internet, that packet must be routed through either a Network Address Translation (NAT) instance or a NAT gateway. For an instance in a private subnet to receive a packet from the Internet, that packet must be routed through a Classic Load Balancer or an Application Load Balancer, or through another compute instance with load balancing software installed. Table 5.3 shows the route table of a private subnet that routes Internet-bound traffic through a NAT gateway.

TABLE 5.3 Private Subnet Route Table with a NAT Gateway Entry

Destination Target Status Propagated
10.0.0.0/16 local Active No
0.0.0.0/0 nat-gateway-id Active No
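The public/private distinction comes down to the subnet’s default route: an Internet gateway target makes the subnet public, a NAT gateway target makes it private, and no default route at all leaves it isolated. A minimal sketch of that classification (the gateway identifiers are hypothetical):

```python
def classify_subnet(route_table):
    """Classify a subnet by the target of its default (0.0.0.0/0) route."""
    target = route_table.get("0.0.0.0/0")
    if target is None:
        return "isolated"   # no route to the Internet at all
    if target.startswith("igw-"):
        return "public"     # default route points at an Internet gateway
    if target.startswith("nat-"):
        return "private"    # outbound-only access via a NAT gateway
    return "other"

# Route tables mirroring Tables 5.2 and 5.3.
public_rt = {"10.0.0.0/16": "local", "0.0.0.0/0": "igw-5af67a3e"}
private_rt = {"10.0.0.0/16": "local", "0.0.0.0/0": "nat-0abc1234"}

print(classify_subnet(public_rt))                  # public
print(classify_subnet(private_rt))                 # private
print(classify_subnet({"10.0.0.0/16": "local"}))   # isolated
```

Note that nothing on the subnet itself says “public” or “private”; the classification is entirely a property of the associated route table.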

A VPC can be created that has no connection to resources outside of the VPC. However, most VPCs have some sort of connection to resources outside the VPC. These connections can take the form of gateways, endpoints, and peering connections.

There are three major gateways for the VPC. They are as follows:

  • Internet gateway
  • NAT gateway
  • VPN gateway

Internet Gateway

An Internet gateway is a horizontally scaled, redundant, and highly available VPC component that allows communication between instances in your VPC and the Internet. It therefore imposes no availability risks or bandwidth constraints on your network traffic.

An Internet gateway serves two purposes: to provide a target in your VPC route tables for Internet-routable traffic and to perform network address translation (NAT) for instances that have been assigned public IPv4 addresses.

An Internet gateway supports IPv4 and IPv6 traffic. The Internet gateway can be created when the VPC is created, or it can be created at a later time. Figure 5.6 shows how to use the AWS CLI to create an Internet gateway.


FIGURE 5.6 AWS CLI to create Internet gateway

NAT Gateway

You can use a network address translation (NAT) gateway to enable instances in a private subnet to connect to the Internet or other AWS services, but prevent the Internet from initiating a connection with those instances.

To create a NAT gateway, you must specify the public subnet in which the NAT gateway will reside. You must also specify an Elastic IP address to associate with the NAT gateway when you create it. After you’ve created a NAT gateway, you must update the route table associated with one or more of your private subnets to point Internet-bound traffic to the NAT gateway. This enables instances in your private subnets to communicate with the Internet.

Each NAT gateway is created in a specific Availability Zone and implemented with redundancy in that zone.

VPN Gateway

A VPN gateway is used either to provide (1) an IPsec VPN connection through the Internet to your location or (2) an AWS Direct Connect connection to your location. In the first case, the VPN gateway is scalable, and two tunnels are configured to provide availability. When a VPN gateway is used for a VPN connection, it supports both static routing and Border Gateway Protocol (BGP). When it is used for an AWS Direct Connect connection, it supports only BGP. AWS Direct Connect is discussed in greater depth later in this chapter. See Figure 5.7 for a diagram of a VPC with a VPN gateway.


FIGURE 5.7 VPC with VPN gateway

In order for an instance to communicate with services outside the VPC, a public IP address needs to be associated with that instance. That IP address can be assigned when the instance is created, or the instance can reach the Internet through NAT. The NAT service can be provided either by an Amazon EC2 instance configured as a NAT server or by a NAT gateway, as discussed earlier in this chapter.


VPC Endpoint

Another way to connect to services outside the VPC, and specifically to Amazon Simple Storage Service (Amazon S3), is to use a VPC endpoint. A VPC endpoint adds the Amazon S3 service, represented by a prefix list, as an entry in your route table. This allows you to create a private connection to Amazon S3 without sending the traffic over the Internet or through a VPN gateway. Table 5.4 demonstrates this concept.

TABLE 5.4 Route Table with VPC Endpoint

Destination Target Status Propagated
10.0.0.0/16 local Active No
0.0.0.0/0 igw-5af67a3e Active No
pl-xxxxxxxx vpce-xxxxxxxx Active No

VPC Peering

You can connect to other VPCs (both VPCs that are under your control and VPCs that are not under your control) by using VPC peering. This allows the other VPC to become another entry in your route table. Table 5.5 demonstrates this concept.

TABLE 5.5 Route Table with VPC Peering

Destination Target Status Propagated
10.0.0.0/16 local Active No
0.0.0.0/0 igw-5af67a3e Active No
192.168.0.0/16 pcx-c37bfaa Active No

An Amazon VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them using private IPv4 addresses or IPv6 addresses. Instances in either VPC can communicate with each other as if they are within the same network. You can create a VPC peering connection between your own VPCs or with a VPC in another AWS account. In both cases, the VPCs must be in the same region.

AWS uses the existing infrastructure of an Amazon VPC to create a VPC peering connection; it is neither a gateway nor a VPN connection, and it does not rely on a separate piece of physical hardware. There is no single point of failure for communication or a bandwidth bottleneck.


Elastic Network Interfaces

An elastic network interface (referred to as a network interface in this book) is a virtual network interface that you can attach to an instance in a VPC. Network interfaces are available only for instances running in a VPC.

A network interface can include the following attributes:

  • A primary private IPv4 address
  • One or more secondary private IPv4 addresses
  • One Elastic IP address (IPv4) per private IPv4 address
  • One public IPv4 address
  • One or more IPv6 addresses
  • One or more security groups
  • A MAC address
  • A source/destination check flag
  • A description

You can create a network interface, attach it to an instance, detach it from an instance, and attach it to another instance. The attributes of a network interface follow it as it’s attached to or detached from an instance and reattached to another instance. When you move a network interface from one instance to another, network traffic is redirected to the new instance.

Every instance in a VPC has a default network interface, called the primary network interface (eth0). You cannot detach a primary network interface from an instance. You can create and attach additional network interfaces. The maximum number of network interfaces that you can use varies by instance type.

When you create a network interface, it inherits the public IPv4 addressing attribute from the subnet. If you later modify the public IPv4 addressing attribute of the subnet, the network interface keeps the setting that was in effect when it was created. If you launch an instance and specify an existing network interface for eth0, the public IPv4 addressing attribute is determined by the network interface.

Additionally, you can associate an IPv6 CIDR block with your VPC and subnet and assign one or more IPv6 addresses from the subnet range to a network interface.

All subnets have a modifiable attribute that determines whether network interfaces created in that subnet (and therefore instances launched into that subnet) are automatically assigned an IPv6 address from the range of the subnet.


Elastic IP Addresses (EIPs)

An Elastic IP address is a static public IPv4 address designed for dynamic cloud computing. An Elastic IP address is associated with your AWS account and allows you to mask the failure of an instance or software by remapping the address to another Amazon EC2 instance in your account.

An Elastic IP address is a public IPv4 address, which is reachable from the Internet. If your instance does not have a public IPv4 address, you can associate an Elastic IP address with your instance to enable communication with the Internet; for example, to connect to your instance from your local computer.

Currently, IPv6 is not supported on an Elastic IP address.

The following are important facts about the use of an Elastic IP address:

  • To use an Elastic IP address, you first allocate one to your account and then associate it with your instance or a network interface.
  • When you associate an Elastic IP address with an instance or its primary network interface, the instance’s public IPv4 address (if one is assigned) is released back into Amazon’s pool of public IPv4 addresses.
  • You can disassociate an Elastic IP address from a resource and re-associate it with a different resource.
  • A disassociated Elastic IP address remains allocated to your account until you explicitly release it.
  • While your instance is running, you are not charged for one Elastic IP address associated with the instance, but you are charged for any additional Elastic IP addresses associated with the instance.
  • There is a small hourly charge if an Elastic IP address is not associated with a running instance, or if it is associated with a stopped instance or an unattached network interface.
  • An Elastic IP address is a regional construct.
  • When you associate an Elastic IP address with an instance that previously had a public IPv4 address, the public DNS host name of the instance changes to match the Elastic IP address.

Amazon VPC Management

You can create and manage VPCs by using the AWS Management Console, the AWS Command Line Interface (AWS CLI), or an AWS Software Development Kit (SDK). The two major tools used to manage and troubleshoot VPC issues are as follows:

  • AWS CloudTrail
  • Amazon VPC Flow Logs

With AWS CloudTrail, you can log various API calls and track activity that way. With Amazon VPC Flow Logs, you can capture information about IP traffic going to and from network interfaces in your VPC. AWS CloudTrail was discussed in Chapter 3. As this is the networking chapter, we discuss Amazon VPC Flow Logs in greater detail.

Amazon VPC Flow Logs

Amazon VPC Flow Logs is a feature that enables you to capture information about the IP traffic going to and from network interfaces in your VPC. Flow log data is stored using Amazon CloudWatch Logs. After you’ve created a flow log, you can view and retrieve its data in Amazon CloudWatch Logs.

Flow logs can help you with a number of tasks; for example, to troubleshoot why specific traffic is not reaching an instance, which in turn can help you diagnose overly restrictive security group rules. You can also use flow logs as a security tool to monitor the traffic that is reaching your instance.

You can create a flow log for a VPC, a subnet, or a network interface. If you create a flow log for a subnet or VPC, each network interface in the VPC or subnet is monitored. Flow log data is published to a log group in CloudWatch Logs, and each network interface has a unique log stream. Log streams contain flow log records, which are log events consisting of fields that describe the traffic for that network interface.
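As an illustration of what a flow log record looks like, the following sketch parses a line in the default (version 2) record format. The sample values are illustrative, not taken from a real account:

```python
# Field names of the default (version 2) flow log record format.
FIELDS = [
    "version", "account_id", "interface_id", "srcaddr", "dstaddr",
    "srcport", "dstport", "protocol", "packets", "bytes",
    "start", "end", "action", "log_status",
]

def parse_record(line):
    """Split one space-separated flow log line into a field dict."""
    return dict(zip(FIELDS, line.split()))

# Illustrative record: SSH traffic (TCP port 22) that was accepted.
sample = ("2 123456789010 eni-1a2b3c4d 172.31.16.139 172.31.16.21 "
          "20641 22 6 20 4249 1418530010 1418530070 ACCEPT OK")

record = parse_record(sample)
print(record["action"])    # ACCEPT
print(record["dstport"])   # 22
print(record["protocol"])  # 6 (the IANA protocol number for TCP)
```

Filtering records by the `action` field (ACCEPT versus REJECT) is how flow logs help diagnose overly restrictive security group or network ACL rules, as described above.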


To create a flow log, you specify the resource for which you want to create the flow log, the type of traffic to capture (accepted traffic, rejected traffic, or all traffic), the name of a log group in CloudWatch Logs to which the flow log will be published, and the Amazon Resource Name (ARN) of an IAM role that has sufficient permission to publish the flow log to the CloudWatch Logs log group. If you specify the name of a log group that does not exist, AWS will attempt to create the log group for you. After you’ve created a flow log, it can take several minutes to begin collecting data and publishing to CloudWatch Logs. Flow logs do not capture real-time log streams for your network interfaces.

You can create multiple flow logs that publish data to the same log group in CloudWatch Logs. If the same network interface is present in one or more flow logs in the same log group, it has one combined log stream. If you’ve specified that one flow log should capture rejected traffic and the other flow log should capture accepted traffic, then the combined log stream captures all traffic.

If you launch more instances into your subnet after you’ve created a flow log for your subnet or VPC, then a new log stream is created for each new network interface as soon as any network traffic is recorded for that network interface.

You can create flow logs for network interfaces that are created by other AWS services; for example, Elastic Load Balancing, Amazon RDS, Amazon ElastiCache, Amazon Redshift, and Amazon WorkSpaces. However, you cannot use these services’ consoles or APIs to create the flow logs. You must use the Amazon EC2 console or the Amazon EC2 API. Similarly, you cannot use the CloudWatch Logs console or API to create log streams for your network interfaces.


AWS Direct Connect

AWS Direct Connect links your internal network to an AWS Direct Connect location over a standard 1-gigabit or 10-gigabit Ethernet fiber-optic cable. One end of the cable is connected to your router, and the other end is connected to an AWS Direct Connect router. With this connection in place, you can create virtual interfaces directly to public AWS Cloud services (for example, to Amazon S3) or to Amazon VPC, bypassing Internet service providers in your network path. An AWS Direct Connect location provides access to AWS in the region with which it is associated.

There are a number of reasons to use AWS Direct Connect instead of the Internet to access your AWS resources:

Reduce your bandwidth costs. For bandwidth-heavy workloads that you want to run on AWS, AWS Direct Connect reduces your network data out charges in two ways. First, by transferring data to and from AWS directly, you can reduce your bandwidth commitment to your Internet service provider. Second, all data transferred over your dedicated connection is charged at the reduced AWS Direct Connect data transfer rate instead of Internet data transfer rates.

Achieve consistent network performance. Network latency over the Internet can vary given that the Internet is constantly changing how data gets from point A to B. With AWS Direct Connect, you choose the data that uses the dedicated connection and how that data is routed, which can provide a more consistent network experience than Internet-based connections.

Private connectivity to AWS You can use AWS Direct Connect to establish a private virtual interface from your on-premises network directly to your Amazon VPC, providing you with a private, high-bandwidth network connection between your network and your VPC. With multiple virtual interfaces, you can even establish private connectivity to multiple VPCs while maintaining network isolation.

Elasticity and scaling AWS Direct Connect provides 1 Gbps and 10 Gbps connections, and you can easily provision multiple connections if you need more capacity. You can also use AWS Direct Connect instead of establishing a VPN connection over the Internet to your Amazon VPC, avoiding the need to use VPN hardware that frequently can’t support data transfer rates above 4 Gbps.

AWS Direct Connect Implementation

You can set up an AWS Direct Connect connection in one of the following ways:

  • At an AWS Direct Connect location
  • Through a member of the AWS Partner Network (APN) or a network carrier
  • Through a hosted connection provided by a member of the APN

Virtual Interfaces

To route traffic over your AWS Direct Connect connection, one or more Virtual Interfaces (VIFs) need to be created. If you have traffic coming from your location and bound for compute instances in your VPC, you need to create a private VIF. If the traffic is destined for AWS Cloud services outside of a VPC or for the Internet, you need to create a public VIF. If you have traffic going to both, you need to create both a private VIF and a public VIF. The components of a VIF are out of scope for the AWS Certified SysOps Administrator exam; however, this book is meant to do more than prepare you for the exam.

Each VIF has the following components:

Virtual Local Area Network (VLAN) ID Each VIF must have its own unique VLAN ID. This has to be a value between 1 and 4094.

Peer IP address AWS Direct Connect supports IPv6 as well as public and private IPv4 addresses, and you can dual stack the interface. For a public VIF, you must specify IPv4 /30 addresses that you own. For a private VIF, private IPv4 /30 addresses are used. For IPv6, Amazon allocates a /125 IPv6 CIDR to you.

Autonomous System Number (ASN) BGP is the only supported routing protocol. For a private VIF, a private ASN is used. For a public VIF, a public ASN is used, and it must be one that you own.

Routes that you will advertise over BGP You can advertise up to 1,000 prefixes to AWS on a public VIF and up to 100 prefixes on a private VIF.
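The numeric constraints above (a VLAN ID between 1 and 4094, IPv4 /30 peering addresses) are easy to sanity-check in code. This sketch uses hypothetical values and is illustrative only, not an AWS API:

```python
import ipaddress

def validate_vif(vlan_id, peer_cidr):
    """Check a virtual interface's VLAN ID and IPv4 BGP peering block."""
    errors = []
    if not 1 <= vlan_id <= 4094:
        errors.append("VLAN ID must be between 1 and 4094")
    net = ipaddress.ip_network(peer_cidr)
    if net.version == 4 and net.prefixlen != 30:
        errors.append("IPv4 peering addresses must be a /30")
    return errors

# A /30 leaves exactly two usable host addresses: one for your
# router and one for the AWS Direct Connect router.
link = ipaddress.ip_network("169.254.255.88/30")
print(len(list(link.hosts())))                  # 2

print(validate_vif(101, "169.254.255.88/30"))   # [] (valid)
print(validate_vif(5000, "169.254.255.88/30"))  # VLAN error reported
```

The two-host arithmetic is the reason /30 blocks are the conventional size for point-to-point BGP peering links.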

As previously mentioned, AWS Direct Connect uses a VPN gateway to connect to your AWS infrastructure. Therefore, you need to have a VPC configured with a VPN gateway attached.

Getting Started with AWS Direct Connect

Now we discuss the steps involved to get started with AWS Direct Connect. For the exam, understand the concepts of the public and private VIF that were discussed previously. The seven steps required to get started with Direct Connect are listed here:

  1. Sign Up for Amazon Web Services. To use AWS Direct Connect, you need an AWS account if you don’t already have one.
  2. Submit AWS Direct Connect Connection Request. You can submit a connection request using the AWS Direct Connect console. Before you begin, ensure that you have the following information:
    1. The port speed that you require: 1 Gbps or 10 Gbps. You cannot change the port speed after you’ve created the connection request.
    2. The AWS Direct Connect location to which to connect.


  3. Download the LOA-CFA. AWS makes a Letter of Authorization and Connecting Facility Assignment (LOA-CFA) available to you for download, or it emails you with a request for more information after you’ve created the connection request. If you receive a request for more information, you must respond within seven days or the connection is deleted. The LOA-CFA is the authorization to connect to AWS, and it is required by the colocation provider or your network provider to establish the cross-network connection (cross-connect).

    After you’ve downloaded the LOA-CFA, do one of the following:

    • If you’re working with a network provider, send the LOA-CFA to your network provider so that they can order a cross-connect for you. You cannot order a cross-connect for yourself in the AWS Direct Connect location if you do not have equipment there. Your network provider does this for you.
    • If you have equipment at the AWS Direct Connect location, contact the colocation provider to request a cross-network connection. You must be a customer of the colocation provider, and you must present them with the LOA-CFA that authorizes the connection to the AWS router, as well as the necessary information to connect to your network.


  4. (Optional) Configure Redundant Connections. To provide for failover, we recommend that you request and configure two dedicated connections to AWS. These connections can terminate on one or two routers in your network.

    There are different configuration choices available when you provision two dedicated connections: Active/Active and Active/Passive.


  5. Create a Virtual Interface. After you have placed an order for an AWS Direct Connect connection, you must create a virtual interface to begin using it. You can create a private virtual interface to connect to your VPC, or you can create a public virtual interface to connect to AWS services that aren’t in a VPC. We will not discuss the specific steps needed, as it is out of the scope of this book. Visit http://docs.aws.amazon.com/directconnect/latest/UserGuide/getting_started.html.


  6. Download Router Configuration. After you have created a virtual interface for your AWS Direct Connect connection, you can download the router configuration file.
  7. Verify Your Virtual Interface. After you have established virtual interfaces to the AWS Cloud or to Amazon VPC, you can verify your AWS Direct Connect connection.

AWS Direct Connect Management

You can use both the AWS Management Console and the AWS CLI to create and work with AWS Direct Connect. You can use tags on your AWS Direct Connect resources to categorize and manage those resources.

High Availability

AWS Direct Connect is not a highly available service by default. While it uses a VPN gateway, which is highly available, it is a single circuit, which is not. In order to achieve high availability with AWS Direct Connect, you need to have multiple AWS Direct Connect connections, and it is recommended to have those connections in different AWS Regions.


Route Preference

While it is beyond the scope of this book to discuss this in detail, you should be aware that when faced with a packet that has multiple routes, AWS will choose routes in this order:

  1. Local routes within the VPC
  2. Longest prefix match first (this would be on the route tables)
  3. Static routes
  4. AWS Direct Connect BGP
  5. VPN static routes (defined on a VPN connection)
  6. VPN BGP

Route preference matters when you use a VPN connection to back up an AWS Direct Connect connection, or the inverse. A VPN can serve as a backup solution for AWS Direct Connect: if traffic is flowing over the VPN and the AWS Direct Connect connection becomes available again, traffic starts taking advantage of AWS Direct Connect immediately.
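The selection order above can be sketched in a few lines of Python. This is an illustrative simplification, not AWS code: local routes win outright, then the longest prefix, then the origin priority from the list above breaks ties. The route entries are made-up examples.

```python
from ipaddress import ip_address, ip_network

# Lower number = preferred origin when prefix lengths tie.
ORIGIN_PRIORITY = {
    "local": 0,       # local routes within the VPC
    "static": 1,      # static routes
    "dx_bgp": 2,      # AWS Direct Connect BGP
    "vpn_static": 3,  # VPN static routes
    "vpn_bgp": 4,     # VPN BGP
}

def choose_route(routes, destination):
    """Pick the route for a destination: local routes first, then the
    longest matching prefix, then the origin priority above."""
    dest = ip_address(destination)
    candidates = [
        (net, origin) for net, origin in routes
        if dest in ip_network(net)
    ]
    if not candidates:
        return None
    return max(
        candidates,
        key=lambda r: (r[1] == "local",
                       ip_network(r[0]).prefixlen,
                       -ORIGIN_PRIORITY[r[1]]),
    )

routes = [
    ("10.0.0.0/16", "local"),
    ("0.0.0.0/0", "vpn_static"),
    ("172.16.0.0/16", "dx_bgp"),
    ("172.16.0.0/16", "vpn_bgp"),
]
print(choose_route(routes, "172.16.5.9"))  # Direct Connect BGP beats VPN BGP
print(choose_route(routes, "10.0.1.20"))   # local route wins
```

Note how the two /16 routes to the same prefix are disambiguated purely by origin, which is exactly the behavior that lets AWS Direct Connect take over from a backup VPN.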


AWS Direct Connect Security

Security is implemented in a number of ways with AWS Direct Connect. Each VIF has Layer 3 isolation from other VIFs because each operates in its own separate VLAN. In addition, it is possible to encrypt data in transit by implementing an IPsec VPN on the AWS Direct Connect connection.

Just like the other AWS services, IAM is used to control which roles, users, or groups can execute which APIs. IAM's role is in the creation and administration of AWS Direct Connect resources. AWS CloudTrail can be used to capture API calls made by AWS Direct Connect.


Load Balancing

Elastic Load Balancing distributes incoming application traffic across multiple Amazon EC2 instances, in multiple Availability Zones within a single AWS Region. Elastic Load Balancing supports two types of load balancers:

  • Application Load Balancers
  • Classic Load Balancers

The Elastic Load Balancing load balancer serves as a single target, which increases the availability of your application. You can add and remove instances from your load balancer as your needs change without disrupting the overall flow of requests to your application. Elastic Load Balancing, working in conjunction with Auto Scaling, can scale your load balancer as traffic to your application changes over time. Refer to Chapter 10, “High Availability,” for details on how Auto Scaling works.

You can configure health checks and send requests only to the healthy instances. You can also offload the work of encryption and decryption to your load balancer. See Table 5.6 for differences between the Classic Load Balancer and the Application Load Balancer.

TABLE 5.6 Classic Load Balancer and Application Load Balancer

Feature | Classic Load Balancer | Application Load Balancer
Protocols | HTTP, HTTPS, TCP, SSL | HTTP, HTTPS
Platforms | Amazon EC2-Classic, Amazon EC2-VPC | Amazon EC2-VPC
Sticky sessions (cookies) | ✓ | Load balancer generated
Idle connection timeout | ✓ | ✓
Connection draining | ✓ | ✓
Cross-zone load balancing | Can be enabled | Always enabled
Health checks | ✓ | Improved
Amazon CloudWatch metrics | ✓ | Improved
Access logs | ✓ | Improved
Host-based routing | | ✓
Path-based routing | | ✓
Route to multiple ports on a single instance | | ✓
HTTP/2 support | | ✓
WebSocket support | | ✓
Load balancer deletion protection | | ✓

Load Balancing Implementation

Because the Classic Load Balancer and the Application Load Balancer distribute traffic in different ways, their implementations differ. This section examines both, starting with the Classic Load Balancer.

Classic Load Balancer

With a Classic Load Balancer, there are a number of parameters that you need to configure. These include the following:

  • Choose VPC
  • Choose subnets
  • Define protocols and ports
  • Choose Internet facing or internal
  • Determine security groups
  • Configure health check
  • Assign Amazon EC2 instances

If you have multiple VPCs, you determine which VPC will contain the load balancer. Load balancers cannot be shared among VPCs. From there, you are given a list of Availability Zones in which you have created subnets. In order for your application to be considered highly available, you must specify at least two Availability Zones. If you need additional Availability Zones, you need to create subnets in those Availability Zones.

High Availability

You can distribute incoming traffic across your Amazon EC2 instances in a single Availability Zone or multiple Availability Zones. The Classic Load Balancer automatically scales its request handling capacity in response to incoming application traffic.

Health Checks

The Classic Load Balancer can detect the health of Amazon EC2 instances. When it detects unhealthy Amazon EC2 instances, it no longer routes traffic to those instances and spreads the load across the remaining healthy instances.
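That behavior can be sketched as a few lines of Python. This is an illustrative simplification, not AWS code: the load balancer filters out instances whose health check is failing and spreads requests across the rest. The instance IDs are made up.

```python
from itertools import cycle

# Hypothetical fleet: one instance is failing its health check.
instances = {"i-aaa": "healthy", "i-bbb": "unhealthy", "i-ccc": "healthy"}

def healthy_targets(health):
    """Return only the instances currently passing their health check."""
    return [i for i, state in sorted(health.items()) if state == "healthy"]

# Round-robin over healthy instances only; i-bbb receives no traffic.
targets = cycle(healthy_targets(instances))
print([next(targets) for _ in range(4)])  # ['i-aaa', 'i-ccc', 'i-aaa', 'i-ccc']
```

When i-bbb passes its health check again, it simply reappears in the healthy set and resumes receiving its share of the load.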

Security Features

When using Amazon Virtual Private Cloud (Amazon VPC), you can create and manage security groups associated with Classic Load Balancers to provide additional networking and security options. You can also create a Classic Load Balancer without public IP addresses to serve as an internal (non-Internet-facing) load balancer.

SSL Offloading

Classic Load Balancers support SSL termination, including offloading SSL decryption from application instances, centralized management of SSL certificates, and encryption to backend instances with optional public key authentication.

Flexible cipher support allows you to control the ciphers and protocols the load balancer presents to clients.

Sticky Sessions

Classic Load Balancers support the ability to stick user sessions to specific EC2 instances using cookies. Traffic will be routed to the same instances as the user continues to access your application.

IPv6 Support

Classic Load Balancers support the use of Internet Protocol versions 4 and 6 (IPv4 and IPv6). IPv6 support is currently unavailable for use in VPC.

Layer 4 or Layer 7 Load Balancing

You can load balance HTTP/HTTPS applications and use layer 7-specific features, such as X-Forwarded-For headers and sticky sessions. You can also use strict layer 4 load balancing for applications that rely purely on the TCP protocol.

Operational Monitoring

Classic Load Balancer metrics, such as request count and request latency, are reported by Amazon CloudWatch.

Logging

Use the Access Logs feature to record all requests sent to your load balancer and store the logs in Amazon S3 for later analysis. The logs are useful for diagnosing application failures and analyzing web traffic.

You can use AWS CloudTrail to record Classic Load Balancer API calls for your account and deliver log files. The API call history enables you to perform security analysis, resource change tracking, and compliance auditing.


Application Load Balancer

With an Application Load Balancer, there are a number of parameters that you need to configure. These include the following:

  • Name of your load balancer
  • Security groups
  • The VPC
  • Availability Zones to be used
  • Internet-facing or internal
  • IP addressing scheme to be used
  • Configure one or more listeners
  • Configure listener rules
  • Define target group

Table 5.6 demonstrates the features of the Application Load Balancer. In order to become a proficient Systems Operator on AWS, you need to know what these features do. The features of the Application Load Balancer are defined in the following sections.

Content-Based Routing

If your application is composed of several individual services, an Application Load Balancer can route a request to a service based on the content of the request.

Host-Based Routing

You can route a client request based on the Host field of the HTTP header, allowing you to route to multiple domains from the same load balancer.

Path-Based Routing

You can route a client request based on the URL path of the HTTP header.
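Host-based and path-based routing can be combined on one load balancer. The following sketch is illustrative only (it is not the ALB API, and the rule and target group names are hypothetical): each listener rule matches on the Host header, the URL path, or both, and unmatched requests fall through to a default target group.

```python
# Hypothetical listener rules: first match wins, as with ALB rule priorities.
rules = [
    {"host": "api.example.com", "path": None,       "target": "api-servers"},
    {"host": None,              "path": "/images/", "target": "image-servers"},
]
default_target = "web-servers"

def route(host, path):
    """Return the target group for a request, checking rules in order."""
    for rule in rules:
        if rule["host"] and rule["host"] != host:
            continue
        if rule["path"] and not path.startswith(rule["path"]):
            continue
        return rule["target"]
    return default_target

print(route("api.example.com", "/v1/users"))      # host-based rule
print(route("www.example.com", "/images/a.png"))  # path-based rule
print(route("www.example.com", "/index.html"))    # default target group
```

This is what lets several services share a single load balancer and a single domain.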

Containerized Application Support

You can now configure an Application Load Balancer to load balance containers across multiple ports on a single Amazon EC2 instance. Amazon EC2 Container Service (Amazon ECS) allows you to specify a dynamic port in the ECS task definition, giving the container an unused port when it is scheduled on the EC2 instance. The ECS scheduler automatically adds the task to the ELB using this port.

HTTP/2 Support

HTTP/2 is a new version of the Hypertext Transfer Protocol (HTTP) that uses a single, multiplexed connection to allow multiple requests to be sent on the same connection. It also compresses header data before sending it out in binary format, and it supports TLS connections to clients.

WebSockets Support

WebSockets allows a server to exchange real-time messages with end users without the end users having to request (or poll) the server for an update. The WebSockets protocol provides bi-directional communication channels between a client and a server over a long-running TCP connection.

Native IPv6 Support

Application Load Balancers support native Internet Protocol version 6 (IPv6) in a VPC. This will allow clients to connect to the Application Load Balancer via IPv4 or IPv6.

Sticky Sessions

Sticky sessions are a mechanism to route requests from the same client to the same target. The Application Load Balancer supports sticky sessions using load balancer generated cookies. If you enable sticky sessions, the same target receives the request and can use the cookie to recover the session context. Stickiness is defined at a target group level.
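The mechanism can be sketched as follows. This is an illustrative simplification, not AWS code: the cookie name AWSALB is the one the Application Load Balancer actually sets, but the target selection and cookie contents here are made up.

```python
import random

targets = ["i-aaa", "i-bbb"]  # hypothetical targets in one target group

def handle_request(cookies):
    """Return (target, cookies). A returning client presenting the
    stickiness cookie is routed back to the same target."""
    if cookies.get("AWSALB") in targets:
        return cookies["AWSALB"], cookies
    target = random.choice(targets)            # new client: pick a target
    return target, {**cookies, "AWSALB": target}  # and set the cookie

first_target, cookies = handle_request({})   # new client
second_target, _ = handle_request(cookies)   # same client, cookie replayed
print(first_target == second_target)         # True: session stuck to one target
```

If the stuck target becomes unhealthy, a real load balancer picks a new healthy target and reissues the cookie, which is why applications should not store unrecoverable state on a single instance.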

Health Checks

An Application Load Balancer routes traffic only to healthy targets. With an Application Load Balancer, you get improved insight into the health of your applications in two ways:

  1. Health check improvements that allow you to configure which HTTP response codes, in the range 200 to 399, indicate a healthy target. The health checks allow you to monitor the health of each of your services behind the load balancer.
  2. New metrics that give insight into traffic for each of the services running on an Amazon EC2 instance.

High Availability

An Application Load Balancer requires you to specify more than one Availability Zone. You can distribute incoming traffic across your targets in multiple Availability Zones. An Application Load Balancer automatically scales its request-handling capacity in response to incoming application traffic.

Layer-7 Load Balancing

You can load balance HTTP/HTTPS applications and use layer 7-specific features, such as X-Forwarded-For (XFF) headers.

HTTPS Support

An Application Load Balancer supports HTTPS termination between the clients and the load balancer. Application Load Balancers also offer management of SSL certificates through AWS Identity and Access Management (IAM) and AWS Certificate Manager, along with predefined security policies.

Operational Monitoring

Amazon CloudWatch reports Application Load Balancer metrics, such as request counts, error counts, error types, and request latency.

Logging

You can use the Access Logs feature to record all requests sent to your load balancer and store the logs in Amazon S3 for later analysis. The logs are compressed and have a .gzip file extension. The compressed logs save both storage space and transfer bandwidth, and they are useful for diagnosing application failures and analyzing web traffic.

You can also use AWS CloudTrail to record Application Load Balancer API calls for your account and deliver log files. The API call history enables you to perform security analysis, resource change tracking, and compliance auditing.

Delete Protection

You can enable deletion protection on an Application Load Balancer to prevent it from being accidentally deleted.

Request Tracing

The Application Load Balancer injects a new custom identifier “X-Amzn-Trace-Id” HTTP header on all requests coming into the load balancer. Request tracing allows you to track a request by its unique ID as the request makes its way across various services that make up your websites and distributed applications. You can use the unique trace identifier to uncover any performance or timing issues in your application stack at the granularity of an individual request.
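A sketch of what such a header value looks like follows. This mimics only the shape of the Root field of X-Amzn-Trace-Id (a version, eight hex digits of the epoch time, and 24 random hex digits); it is an illustration, not AWS code.

```python
import os
import time

def make_trace_id():
    """Build a Root value in the same shape as X-Amzn-Trace-Id:
    'Root=1-<8 hex epoch digits>-<24 random hex digits>'."""
    return "Root=1-{:08x}-{}".format(int(time.time()), os.urandom(12).hex())

trace = make_trace_id()
print(trace)  # e.g. Root=1-5759e988-bd862e3fe1be46a994272793

# Downstream services can log the same identifier to correlate one
# request across every tier it touches.
parts = trace.split("=", 1)[1].split("-")
print(len(parts[1]) == 8 and len(parts[2]) == 24)  # True
```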

AWS Web Application Firewall

You can now use AWS WAF to protect your web applications on your Application Load Balancers. AWS WAF is a web application firewall that helps protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources.

Load Balancing Management

Elastic Load Balancing is a highly available and scalable service.

You can create, access, and manage your load balancers using any of the following interfaces:

AWS Management Console Provides a web interface that you can use to access Elastic Load Balancing.

AWS CLI Provides commands for a broad set of AWS Cloud services, including Elastic Load Balancing, and it is supported on Windows, Mac, and Linux.

AWS SDKs Provides language-specific APIs and takes care of many of the connection details, such as calculating signatures, handling request retries, and error handling.

Query API Provides low-level API actions that you call using HTTPS requests. Using the Query API is the most direct way to access Elastic Load Balancing, but it requires that your application handle low-level details such as generating the hash to sign the request and error handling.

There are three tools for managing both Classic Load Balancers and Application Load Balancers: Amazon CloudWatch, AWS CloudTrail, and access logs. In addition, Application Load Balancers have a feature that allows request tracing to track HTTP requests from clients to targets.

Amazon CloudWatch

Amazon CloudWatch provides a number of metrics to track the performance of Elastic Load Balancing load balancers. These metrics include the number of healthy hosts, average latency, number of requests, and number of connections, among others. These metrics are updated on a 60-second interval.

AWS CloudTrail

AWS CloudTrail can capture the API calls used by the Elastic Load Balancing load balancers and deliver those logs to an Amazon S3 bucket that you specify. From there you can analyze the logs as needed. The information these logs will include are things like the time the API was invoked, what invoked the API, and the parameters of the request.

Access Logs

Access logs are an optional feature of Elastic Load Balancing. Access logs capture greater detailed information than AWS CloudTrail does, including such information as latencies, request paths, IP addresses of clients making the requests, and the instance processing the request, among other information.

Request Tracing

Request tracing is used to track HTTP requests from clients to targets or other services. When the load balancer receives a request from a client, it adds or updates the X-Amzn-Trace-Id header before sending the request to the target. Any services or applications between the load balancer and the target can also add or update this header. The contents of the header are logged in access logs.

Load Balancing Security

As mentioned earlier, security groups control the traffic allowed to and from your load balancer. You can choose the ports and protocols to allow for both inbound and outbound traffic. You apply the security group to the load balancer, so it is applied to all interfaces.

The rules associated with your load balancer security group must allow traffic in both directions. These rules need to be applied to both the listener and the health check ports. When you add a listener to a load balancer or update the health check port for a target group, you need to review your security group rules to ensure that they allow traffic on the new port in both directions.

Authentication and Access Control for Your Load Balancers

AWS uses security credentials to identify you and to grant you access to your AWS resources. You can use features of AWS Identity and Access Management (IAM) to allow other users, services, and applications to use your AWS resources fully or in a limited way, without sharing your security credentials.

By default, IAM users don’t have permission to create, view, or modify AWS resources. To allow an IAM user to access resources, such as a load balancer, and perform tasks, you must create an IAM policy that grants the IAM user permission to use the specific resources and API actions they’ll need and then attach the policy to the IAM user or the group to which the IAM user belongs. When you attach a policy to a user or group of users, it allows or denies the users permission to perform the specified tasks on the specified resources.

For example, you can use IAM to create users and groups under your AWS account (an IAM user can be a person, a system, or an application). Then you grant permissions to the users and groups to perform specific actions on the specified resources using an IAM policy.

Virtual Private Network (VPN)

You can connect your VPC to remote networks by using a VPN connection. There are a number of ways to create this VPN connection. Your options are as follows:

  • Virtual Private Gateway (VGW)
  • AWS VPN CloudHub
  • Software VPN

VPN Installation

To implement a Virtual Private Network, create a VGW and then attach it to a VPC. While you can have up to five VGWs per AWS Region, you can only have one VGW attached to a VPC. A VGW can be used for VPNs using IPsec and AWS Direct Connect.

A VGW can support both static routing and BGP. With both static routing and BGP, the IP ranges of the customer location and the VPC must not overlap.
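The overlap requirement is easy to verify up front with the standard library. This is an illustrative check, not AWS tooling, and the CIDR blocks are made-up examples.

```python
from ipaddress import ip_network

def ranges_overlap(cidr_a, cidr_b):
    """True if the two CIDR blocks share any addresses."""
    return ip_network(cidr_a).overlaps(ip_network(cidr_b))

# A VPC CIDR against two candidate on-premises ranges:
print(ranges_overlap("10.0.0.0/16", "10.0.5.0/24"))     # True: must renumber
print(ranges_overlap("10.0.0.0/16", "192.168.0.0/24"))  # False: safe to connect
```

Running this check before ordering circuits or configuring tunnels avoids discovering an address collision after the VPN is already up.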

CloudHub

AWS VPN CloudHub also uses a VGW and, using a hub-and-spoke model, connects multiple customer gateways. AWS VPN CloudHub uses BGP with each customer location having a unique ASN.

Software VPN

Creating a software VPN involves spinning up one or more Amazon EC2 instances in one or more Availability Zones within the region and then loading VPN software onto those Amazon EC2 instances. The VPN software can be acquired directly by the customer, or it can be acquired via AWS Marketplace. AWS Marketplace offers a number of options, including OpenVPN, Cisco, Juniper, and Brocade, among others.

VPN Management

VGW is highly available and scalable. Each VGW comes with two publicly accessible IP addresses. This means that a VGW sets up two separate IPsec tunnels. You need to provision two public IP addresses on your side. These can be on a single customer gateway, two customer gateways at the same location, or two customer gateways in two different locations. You can connect multiple customer gateways to the same VGW.

AWS VPN CloudHub is highly available and scalable. With AWS VPN CloudHub, each location advertises its appropriate routes over its VPN connection. AWS VPN CloudHub receives these advertisements and re-advertises them to the other customer gateways. This allows each site to send and receive data from the other customer sites.

Creating a software VPN gives you both the greatest level of control and the greatest level of responsibility. You spin up the instance or instances and are responsible for their placement, for ensuring that they are correctly sized (and can grow in size or number to meet increased demand), and for monitoring and replacing them if they stop working or run with reduced functionality.


Amazon Route 53

Amazon Route 53 is a highly available and scalable cloud DNS web service. DNS routes end users to Internet applications by translating names like www.example.com into the numeric IP addresses like 192.0.2.1 that computers use to connect to each other. Amazon Route 53 is fully compliant with IPv6.

You can use Amazon Route 53 to help you get a website or web application up and running. Amazon Route 53 enables you to perform three main functions:

Register domain names. Your website needs a name, such as example.com. Amazon Route 53 lets you register a name for your website or web application, known as a domain name.

Route Internet traffic to the resources for your domain. When a user opens a web browser and enters your domain name in the address bar, Amazon Route 53 helps the DNS connect the browser with your website or web application.

Check the health of your resources. Amazon Route 53 sends automated requests over the Internet to a resource, such as a web server, to verify that it is reachable, available, and functional. You also can choose to receive notifications when a resource becomes unavailable and choose to route Internet traffic away from unhealthy resources.

You can use any combination of these functions. For example, you can use Amazon Route 53 both to register your domain name and to route Internet traffic for the domain, or you can use Amazon Route 53 to route Internet traffic for a domain that you registered with another domain registrar. If you choose to use Amazon Route 53 for all three functions, you register your domain name, then configure Amazon Route 53 to route Internet traffic for your domain, and finally configure Amazon Route 53 to check the health of your resources. You can use Amazon Route 53 to manage both public and private hosted zones. So, you can use Amazon Route 53 to distribute traffic between multiple AWS Regions and to distribute traffic within an AWS Region.

Amazon Route 53 Implementation

In addition to registering new domains, you can transfer existing domains. When you register a domain with Amazon Route 53, a hosted zone is automatically created for that domain. This makes it easier to use Amazon Route 53 as the DNS service provider for this domain. You are, however, not obligated to use Amazon Route 53 as the DNS service provider. You may route your DNS queries to another DNS provider.

When you’re using Amazon Route 53 as the DNS service provider, you need to configure the DNS service. As mentioned, when you use Amazon Route 53 as the domain registrar, a hosted zone is automatically created for you. If you are not using Amazon Route 53 as the domain registrar, then you will need to create a hosted zone.

A hosted zone contains information about how you want to route your traffic both for your domain and for any subdomains that you may have. Amazon Route 53 assigns a unique set of nameservers for each hosted zone that you create. You can use this set of nameservers for multiple hosted zones if you want.


After you have created your hosted zones, you need to create resource record sets. This involves two parts: the record type and the routing policy. Different routing policies can be applied to different record types.

A routing policy determines how Amazon Route 53 responds to queries. There are five routing policies available, and each of them is explained in this chapter.

Simple Routing

Use a simple routing policy when you have a single resource that performs a given function for your domain; for example, one web server that serves content for the example.com website. In this case, Amazon Route 53 responds to DNS queries based only on the values in the resource record set; for example, the IP address in an A record.

Weighted Routing

Use the weighted routing policy when you have multiple resources that perform the same function (for example, web servers that serve the same website) and you want Amazon Route 53 to route traffic to those resources in proportions that you specify (for example, one quarter to one server and three quarters to the other). Figure 5.8 demonstrates weighted routing.


FIGURE 5.8 Weighted routing
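The 20:80 split in Figure 5.8 can be sketched as follows. This is an illustration, not the Amazon Route 53 API: the resolver simply answers with each record in proportion to its weight, and the record names are hypothetical.

```python
import random

# Hypothetical weighted record set: blue gets 20, green gets 80.
records = {"blue-elb.example.com": 20, "green-elb.example.com": 80}

def resolve(records):
    """Answer a DNS query with one record, chosen by weight."""
    names = list(records)
    return random.choices(names, weights=[records[n] for n in names])[0]

answers = [resolve(records) for _ in range(10_000)]
green_share = answers.count("green-elb.example.com") / len(answers)
print(round(green_share, 1))  # close to 0.8
```

Shifting the weights gradually (say 10:90, then 0:100) is the common blue/green deployment pattern this routing policy enables.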

Latency-Based Routing

Use the latency routing policy when you have resources in multiple Amazon EC2 datacenters that perform the same function and you want Amazon Route 53 to respond to DNS queries with the resources that provide the best latency. For example, you might have web servers for example.com in the Amazon EC2 datacenters in Ireland and in Tokyo. When a user browses to example.com, Amazon Route 53 chooses to respond to the DNS query based on which datacenter gives your user the lowest latency. Figure 5.9 demonstrates this concept.


FIGURE 5.9 Latency-based routing
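The decision can be sketched like this. It is an illustration only, not the Amazon Route 53 API; the latency figures are made-up milliseconds, whereas Route 53 uses its own measurements.

```python
# Hypothetical measured latencies from user locations to each region.
latency_ms = {
    "eu-west-1 (Ireland)": {"London": 15, "Singapore": 180},
    "ap-northeast-1 (Tokyo)": {"London": 220, "Singapore": 70},
}

def best_region(user_location):
    """Answer with the region offering the lowest latency for this user."""
    return min(latency_ms, key=lambda region: latency_ms[region][user_location])

print(best_region("London"))     # eu-west-1 (Ireland)
print(best_region("Singapore"))  # ap-northeast-1 (Tokyo)
```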

Geolocation Routing

Use the geolocation routing policy when you want Amazon Route 53 to respond to DNS queries based on the location of your users. Geolocation routing returns the resource based on the geographic location of the user. You can specify geographic locations by continent, country, or state within the United States.
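The lookup can be sketched as a most-specific-match table. This is an illustration, not the Amazon Route 53 API; the endpoints are hypothetical, and the default record stands in for Route 53's default geolocation record, which answers queries that match no location rule.

```python
# Hypothetical geolocation rules, keyed by scope and location code.
geo_rules = {
    ("continent", "EU"): "eu.example.com",
    ("country", "US"): "us.example.com",
}
default_record = "global.example.com"

def geo_resolve(continent, country):
    """Most specific match wins: country before continent, then default."""
    return (geo_rules.get(("country", country))
            or geo_rules.get(("continent", continent))
            or default_record)

print(geo_resolve("NA", "US"))  # us.example.com
print(geo_resolve("EU", "FR"))  # eu.example.com (continent rule)
print(geo_resolve("AS", "JP"))  # global.example.com (default)
```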


Failover Routing

When using a failover routing policy, you designate a primary resource and a secondary resource. The secondary resource takes over in the event of a failure of the primary resource. To accomplish this, you configure a health check for the primary resource record set. If the health check fails, Amazon Route 53 routes the traffic to the secondary resource. It is recommended, but not obligatory, to configure a health check for the secondary resource. If both record sets are unhealthy, Amazon Route 53 returns the primary resource record set. Health checks are discussed in greater detail in Chapter 10.
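The decision logic described above fits in a few lines. This is an illustrative sketch, not the Amazon Route 53 API; it assumes health checks exist on both record sets.

```python
def failover_resolve(primary_healthy, secondary_healthy):
    """Answer with the primary while it is healthy; fail over to the
    secondary otherwise. If both are unhealthy, the primary record is
    returned anyway, as described above."""
    if primary_healthy:
        return "primary"
    if secondary_healthy:
        return "secondary"
    return "primary"  # both unhealthy: answer with the primary

print(failover_resolve(True, True))    # primary
print(failover_resolve(False, True))   # secondary
print(failover_resolve(False, False))  # primary
```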


DNS Record Types

Explaining the various DNS record types is outside the scope of this book. However, Table 5.7 shows the supported record types for Amazon Route 53.

TABLE 5.7 Amazon Route 53 Supported DNS Record Types

Record Type | Description
A | Address mapping records
AAAA | IPv6 address records
CNAME | Canonical name records
MX | Mail exchanger record
NAPTR | Name authority pointer record
NS | Name server records
PTR | Reverse-lookup pointer records
SOA | Start of authority records
SPF | Sender policy framework record
SRV | Service record
TXT | Text records

In addition to the standard DNS record types supported, Amazon Route 53 supports a record type called Alias. An Alias record type, instead of pointing to an IP address or a domain name, points to one of the following:

  • An Amazon CloudFront distribution
  • An AWS Elastic Beanstalk environment
  • An Elastic Load Balancing Classic or Application Load Balancer
  • An Amazon S3 bucket that is configured as a static website
  • Another Amazon Route 53 resource record set in the same hosted zone

Health Checks

There are three types of health checks that you can configure with Amazon Route 53. They are as follows:

  • The health of a specified resource, such as a web server
  • The status of an Amazon CloudWatch alarm
  • The status of other health checks

In this section, we explore each type. The level of detail covered may not be tested on the exam. However, as an AWS Certified Systems Operator, the material covered here is a must-know.

The health of a specified resource, such as a web server You can configure a health check that monitors an endpoint that you specify either by IP address or by domain name. At regular intervals that you specify, Amazon Route 53 submits automated requests over the Internet to your application, server, or other resource to verify that it’s reachable, available, and functional. Optionally, you can configure the health check to make requests similar to those that your users make, such as requesting a web page from a specific URL.

The status of an Amazon CloudWatch alarm You can create CloudWatch alarms that monitor the status of CloudWatch metrics, such as the number of throttled read events for an Amazon DynamoDB database or the number of Elastic Load Balancing hosts that are considered healthy. After you create an alarm, you can create a health check that monitors the same data stream that CloudWatch monitors for the alarm.

To improve resiliency and availability, Amazon Route 53 doesn’t wait for the CloudWatch alarm to go into the ALARM state. The status of a health check changes from healthy to unhealthy based on the data stream and on the criteria in the CloudWatch alarm. The status of a health check can change from healthy to unhealthy even before the state of the corresponding alarm has changed to ALARM in CloudWatch.

The status of other health checks You can create a health check that monitors whether Amazon Route 53 considers other health checks healthy or unhealthy. One situation where this might be useful is when you have multiple resources that perform the same function, such as multiple web servers, and your chief concern is whether some minimum number of your resources is healthy. You can create a health check for each resource without configuring notification for those health checks. Then you can create a health check that monitors the status of the other health checks and that notifies you only when the number of available web resources drops below a specified threshold.
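That third type, sometimes called a calculated health check, reduces to a threshold over child checks. The sketch below is an illustration, not the Amazon Route 53 API.

```python
def calculated_check(child_statuses, minimum_healthy):
    """Healthy only while at least minimum_healthy child health
    checks are passing."""
    return sum(child_statuses) >= minimum_healthy

# Hypothetical fleet of four web servers, one currently failing.
web_servers = [True, True, False, True]
print(calculated_check(web_servers, 3))  # True: enough capacity remains
print(calculated_check(web_servers, 4))  # False: below the threshold, notify
```

Pairing notification with only the calculated check, as the text suggests, means you are paged when capacity drops below the threshold rather than on every single instance failure.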

Amazon Route 53 Management

You can access Amazon Route 53 in the following ways:

  • AWS Management Console
  • AWS SDKs
  • Amazon Route 53 API
  • AWS CLI
  • AWS Tools for Windows PowerShell

The best tool for monitoring the status of your domain is the Amazon Route 53 dashboard. This dashboard gives the status of any new domain registrations, domain transfers, and any domains approaching expiration.

The tools used to monitor your DNS service with Amazon Route 53 are health checks, Amazon CloudWatch, and AWS CloudTrail. Health checks were discussed earlier in this chapter. Amazon CloudWatch monitors metrics such as the number of health checks listed as healthy, the length of time an SSL handshake took, and the time it took for the health check to receive the first byte, among other metrics. AWS CloudTrail can capture all of the API requests made for Amazon Route 53, letting you determine which user invoked a particular API.

Amazon Route 53 Authentication and Access Control

To perform any operation on Amazon Route 53 resources, such as registering a domain or updating a resource record set, IAM requires you to authenticate to prove that you’re an approved AWS user. If you’re using the Amazon Route 53 console, you authenticate your identity by providing your AWS user name and a password. If you’re accessing Amazon Route 53 programmatically, your application authenticates your identity for you by using access keys or by signing requests.

After you authenticate your identity, IAM controls your access to AWS by verifying that you have permissions to perform operations and to access resources. If you are an account administrator, you can use IAM to control the access of other users to the resources that are associated with your account.

Amazon CloudFront

Amazon CloudFront is a web service that speeds up distribution of your static and dynamic web content—for example, .html, .css, .php, image, and media files—to end users. Amazon CloudFront delivers your content through a worldwide network of edge locations.

When an end user requests content that you’re serving with Amazon CloudFront, the user is routed to the edge location that provides the lowest latency, so content is delivered with the best possible performance. If the content is already in that edge location, Amazon CloudFront delivers it immediately. If the content is not currently in that edge location, Amazon CloudFront retrieves it from an Amazon S3 bucket or an HTTP server (for example, a web server) that you have identified as the source for the definitive version of your content.

Amazon CloudFront can be used to distribute static content, dynamic content, and streaming media.

Amazon CloudFront Implementation

When implementing Amazon CloudFront, the first step is to configure your origin servers, from which Amazon CloudFront gets your files for distribution from Amazon CloudFront edge locations all over the world. An origin server stores the original, definitive version of your objects.

If you’re serving content over HTTP, your origin server is either an Amazon S3 bucket or an HTTP server, such as a web server. Your HTTP server can run on an Amazon EC2 instance or on a server that you manage; these servers are also known as custom origins. If you distribute media files on demand using the Adobe Media Server Real-Time Messaging Protocol (RTMP), your origin server is always an Amazon S3 bucket.

The next step is to upload your files to your origin servers. Your files, also known as objects, typically include web pages, images, and media files, but they can be anything that can be served over HTTP or a supported version of Adobe RTMP, the protocol used by Adobe Flash Media Server.

The final step is to create an Amazon CloudFront distribution, which tells Amazon CloudFront which origin servers to get your files from when users request the files through your website or application. In addition, you can configure your origin server to add headers to the files; the headers indicate how long you want the files to stay in the cache in Amazon CloudFront edge locations.

At this point, Amazon CloudFront assigns a domain name to your new distribution and displays it in the Amazon CloudFront console or returns it in the response to a programmatic request. You can also configure your Amazon CloudFront distribution so that you can use your own domain name.

Now that we have discussed (at a very high level) the steps required to implement an Amazon CloudFront distribution, let’s talk about how that distribution is configured. The following are features that relate to Amazon CloudFront web distributions.

Cache Behaviors

A cache behavior is the set of rules that you configure for a given URL pattern based on file extensions, file names, or any portion of a URL path on your website (for example, *.jpg). You can configure multiple cache behaviors for your web distribution. Amazon CloudFront will match incoming viewer requests with your list of URL patterns, and if there is a match, the service will honor the cache behavior that you configure for that URL pattern. Each cache behavior can include the following Amazon CloudFront configuration values: origin server name, viewer connection protocol, minimum expiration period, query string parameters, and trusted signers for private content.
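The pattern-matching step can be sketched in a few lines. The behavior table and origin names below are hypothetical, and Python's `fnmatch` is used as a stand-in for CloudFront's own pattern syntax (which supports `*` and `?`):

```python
from fnmatch import fnmatch

# Hypothetical cache-behavior table: (path pattern, settings) pairs are
# evaluated in order, with "*" as the catch-all default behavior, mirroring
# how CloudFront matches a viewer request against your configured patterns.
behaviors = [
    ("*.jpg",  {"origin": "media-bucket", "min_ttl": 86400}),
    ("/api/*", {"origin": "app-server",   "min_ttl": 0}),
    ("*",      {"origin": "web-server",   "min_ttl": 3600}),  # default
]

def match_behavior(path):
    for pattern, settings in behaviors:
        if fnmatch(path, pattern):
            return settings
    return None

print(match_behavior("/images/logo.jpg")["origin"])  # media-bucket
print(match_behavior("/api/v1/users")["origin"])     # app-server
print(match_behavior("/index.html")["origin"])       # web-server
```

Ordering matters: the first matching pattern wins, so more specific patterns should precede the default.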

Regional Edge Caches

You can use Amazon CloudFront to deliver content at improved performance for your viewers while reducing the load on your origin resources. Regional Edge Caches sit in between your origin web server and the global edge locations that serve traffic directly to your viewers. As the popularity of your objects is reduced, individual edge locations may evict those objects to make room for more popular content. Regional Edge Caches have larger cache width than any individual edge location, so objects remain in cache longer at these Regional Edge Caches. This helps keep more of your content closer to your viewers, reducing the need for CloudFront to go back to your origin web server and improving overall performance for viewers. For instance, all edge locations in Europe go to the Regional Edge Cache in Frankfurt to fetch an object before going back to your origin web server. Regional Edge Cache locations are currently utilized only for requests that need to go back to a custom origin; requests to Amazon S3 origins skip Regional Edge Cache locations.

You do not need to make any changes to your Amazon CloudFront distributions; this feature is enabled by default for all CloudFront distributions. There is no additional cost for using this feature.

Origin Servers

You can configure one or more origin servers for your Amazon CloudFront web distribution. Origin servers can be an AWS resource, such as Amazon S3, Amazon EC2, Elastic Load Balancing, or a custom origin server outside of AWS. Amazon CloudFront will request content from each origin server by matching the URLs requested by the viewer with rules that you configure for your distribution. This feature allows you the flexibility to use each AWS resource for what it’s designed for—Amazon S3 for storage, Amazon EC2 for compute, and so forth—without the need to create multiple distributions and manage multiple domain names on your website. You can also continue to use origin servers that you already have set up without the need to move data or re-deploy your application code. Furthermore, Amazon CloudFront allows the directory path as the origin name; that is, when you specify the origin for a CloudFront distribution, you can specify a directory path in addition to a domain name. This makes it easier for you to deliver different types of content via CloudFront without changing your origin infrastructure.

Private Content

You can use Amazon CloudFront’s private content feature to control who is able to access your content. This optional feature lets you use Amazon CloudFront to deliver valuable content that you prefer not to make publicly available by requiring your users to use a signed URL or have a signed HTTP cookie when requesting your content.
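The signed-URL idea can be illustrated with a small sketch. Note the hedging: CloudFront's actual scheme signs a policy document with an RSA key pair you register with AWS; the HMAC variant below is only a simplified stand-in to show the shape of the mechanism (an expiry plus a signature the edge can verify):

```python
import hashlib
import hmac

SECRET = b"demo-shared-secret"  # hypothetical key, NOT a real CloudFront key

def sign(url, expires_at):
    """Return a hex signature over the URL and its expiry time."""
    message = f"{url}?Expires={expires_at}".encode()
    return hmac.new(SECRET, message, hashlib.sha256).hexdigest()

def is_request_allowed(url, expires_at, signature, now):
    """Edge-side check: signature must match and the URL must not be expired."""
    expected = sign(url, expires_at)
    return hmac.compare_digest(signature, expected) and now < expires_at

url = "https://d111.example.cloudfront.net/private/video.mp4"
sig = sign(url, expires_at=1700000000)
print(is_request_allowed(url, 1700000000, sig, now=1690000000))  # True
print(is_request_allowed(url, 1700000000, sig, now=1710000000))  # False (expired)
```

Signed cookies work on the same principle, carrying the policy and signature in cookies instead of the URL so that one grant can cover many objects.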

Device Detection

Amazon CloudFront edge locations can look at the value of the User Agent header to detect the device type of all the incoming requests. Amazon CloudFront can determine whether the end user request came from a desktop, tablet, smart TV, or mobile device and pass that information in the form of new HTTP headers to your origin server—Amazon EC2, Elastic Load Balancing, or your custom origin server. Your origin server can use the device type information to generate different versions of the content based on the new headers. Amazon CloudFront will also cache the different versions of the content at that edge location.
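A minimal sketch of that User-Agent classification follows. The matching rules here are our own simplification; CloudFront performs its own detection and conveys the result in headers such as `CloudFront-Is-Mobile-Viewer`:

```python
# Hypothetical User-Agent classifier illustrating device detection.
# Real-world detection is more involved; these substring rules are a sketch.
def detect_device(user_agent):
    ua = user_agent.lower()
    if "ipad" in ua or "tablet" in ua:
        return "tablet"
    if "mobile" in ua or "iphone" in ua or "android" in ua:
        return "mobile"
    if "smart-tv" in ua or "smarttv" in ua:
        return "smart-tv"
    return "desktop"

print(detect_device("Mozilla/5.0 (iPhone; CPU iPhone OS 15_0 like Mac OS X)"))  # mobile
print(detect_device("Mozilla/5.0 (Windows NT 10.0; Win64; x64)"))               # desktop
```

Your origin would branch on the resulting device type to serve, for example, a lighter page to mobile viewers, while CloudFront caches each variant separately.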

Geo Targeting

Amazon CloudFront can also detect the country from where the end users are accessing your content. Amazon CloudFront can then pass the information about the country in a new HTTP header to your custom origin server. Your origin server can generate different versions of the content for users in different countries and cache these different versions at the edge location to serve subsequent users visiting your website from the same country.

Cross-Origin Resource Sharing

Amazon CloudFront may be configured to forward the origin header value so that your origin server (Amazon S3 or a custom origin) can support cross-origin access via Cross-Origin Resource Sharing (CORS). CORS defines a way for client web applications that are loaded in one domain to interact with resources in a different domain.

Viewer Connection Protocol

Content can be delivered to viewers using either the HTTP or HTTPS protocol. By default, your web distribution will accept requests on either protocol. However, if you want all of your content or certain URLs delivered only over an HTTPS connection, you can configure your distribution to only accept requests that come over HTTPS for that content. You can configure this feature separately for each URL pattern in your web distribution as part of the cache behavior for that URL pattern.

Protocol Detection

You can configure Amazon CloudFront to include the protocol (HTTP vs. HTTPS) of your end user’s request as part of the cache key to identify an object uniquely in cache. This allows you to customize your content based on the protocol that your end users are using to access your content.

Custom SSL

Custom SSL certificate support lets you deliver content over HTTPS using your own domain name and your own SSL certificate. This gives visitors to your website the security benefits of Amazon CloudFront over an SSL connection that uses your own domain name in addition to lower latency and higher reliability. You can also configure CloudFront to use HTTPS connections for origin fetches so that your data is encrypted end-to-end from your origin to your end users. Configuring custom SSL certificate support is easy; you don’t need to learn any proprietary code or hire any consultants to configure it for you.

You can provision SSL/TLS certificates and associate them with Amazon CloudFront distributions within minutes. Simply provision a certificate using AWS Certificate Manager (ACM) and deploy it to your CloudFront distribution with a couple of clicks. Then let ACM manage certificate renewals for you. ACM allows you to provision, deploy, and manage the certificate with no additional charges.


Geo Restriction

Geo Restriction or Geoblocking lets you choose the countries in which you want to restrict access to your content. By configuring either a whitelist or a blacklist of countries, you can control delivery of your content through Amazon CloudFront only to countries where you have the license to distribute. To enable this feature, you can either use the Amazon CloudFront API or the Amazon CloudFront Management Console. When a viewer from a restricted country submits a request to download your content, Amazon CloudFront responds with an HTTP status code 403 (Forbidden). You can also configure Custom Error Pages to customize the response that Amazon CloudFront sends to your viewers.
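The whitelist/blacklist decision can be sketched as follows. The function and its parameters are illustrative; country codes are ISO 3166-1 alpha-2, as CloudFront uses:

```python
# Sketch of Geo Restriction: return the HTTP status CloudFront would send,
# 200 for an allowed viewer and 403 (Forbidden) for a blocked one.
def geo_restrict(country, mode, countries):
    listed = country in countries
    allowed = listed if mode == "whitelist" else not listed
    return 200 if allowed else 403

print(geo_restrict("US", "whitelist", {"US", "CA"}))  # 200
print(geo_restrict("FR", "whitelist", {"US", "CA"}))  # 403 (not whitelisted)
print(geo_restrict("FR", "blacklist", {"FR"}))        # 403 (blacklisted)
```

A whitelist allows only the listed countries; a blacklist allows everyone except the listed countries. A distribution uses one mode or the other, not both.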

TTL Settings: Min, Max, and Default TTL

Amazon CloudFront lets you configure a minimum time-to-live (Min TTL), a maximum TTL (Max TTL), and a default TTL to specify how long CloudFront caches your objects in edge locations. Amazon CloudFront uses the expiration period that your origin sets on your files (through Cache-Control headers) to determine whether CloudFront needs to check the origin for an updated version of the file. If you expect that your files will change frequently, you can configure your origin to set a short expiration period on your files. Amazon CloudFront accepts expiration periods as short as 0 seconds (in which case CloudFront will revalidate each viewer request with the origin). Amazon CloudFront also honors special Cache-Control directives such as private, no-store, and so on. These are often useful when delivering dynamic content that you don’t want CloudFront to cache.

If you have not set a Cache-Control header on your files, Amazon CloudFront uses the value of Default TTL to determine how long the file should be cached at the edge before Amazon CloudFront checks the origin for an updated version of the file. If you don’t want to rely on the Cache-Control headers set by your origin, you can now easily override the Cache-Control headers by setting the same value for Max TTL, Min TTL, and Default TTL. By setting both a Min TTL and a Max TTL, you can override origin misconfigurations that might cause objects to be cached for longer or shorter periods than you intend. Min TTL, Max TTL, and Default TTL values can be configured uniquely for each of the cache behaviors you define. This allows you to maximize the cache duration for different types of content on your site by setting a lower bound, upper bound, or a default value on the length of time each file can remain in cache.
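The interaction between the origin's `Cache-Control: max-age` and the Min/Max/Default TTL settings can be approximated with a clamp. This is a simplification of CloudFront's actual precedence rules, written to match the behavior described above:

```python
# Simplified model of effective edge TTL: the origin's max-age is clamped
# into [Min TTL, Max TTL]; with no Cache-Control header, Default TTL applies.
def effective_ttl(cache_control_max_age, min_ttl, default_ttl, max_ttl):
    if cache_control_max_age is None:
        origin_ttl = default_ttl      # no header: fall back to Default TTL
    else:
        origin_ttl = cache_control_max_age
    return max(min_ttl, min(origin_ttl, max_ttl))

print(effective_ttl(None, 0, 86400, 31536000))    # 86400   (Default TTL used)
print(effective_ttl(60, 300, 86400, 31536000))    # 300     (raised to Min TTL)
print(effective_ttl(10**9, 0, 86400, 31536000))   # 31536000 (capped at Max TTL)
```

Setting Min TTL, Max TTL, and Default TTL to the same value makes the clamp collapse to a constant, which is exactly the "override the origin's Cache-Control headers" trick mentioned above.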

Query String Parameters

Query string parameters are often used to return customized content generated by a script running on the origin server. By default, Amazon CloudFront does not forward query string parameters (for example, "?x=1&y=2") to the origin. In addition, the query string portion of the URL is ignored when identifying a unique object in the cache. However, you can optionally configure query strings to be forwarded to the origin servers and be included in the unique identity of the cached object. This feature can be enabled separately for each unique cache behavior that you configure. Query string parameters can thus help you customize your web pages for each viewer while still taking advantage of the performance and scale benefits offered by caching content at Amazon CloudFront edge locations.
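The cache-key effect described above can be demonstrated with a short sketch. The sorting step is our own normalization choice for the example, not something CloudFront guarantees:

```python
from urllib.parse import parse_qsl, urlencode, urlsplit

# Sketch of the cache-key effect: with forwarding off, the query string is
# ignored, so different parameter values map to the same cached object.
def cache_key(url, forward_query_strings):
    parts = urlsplit(url)
    if not forward_query_strings:
        return parts.path
    # Sorting normalizes parameter order so equivalent URLs share one entry
    # (an assumption of this sketch, not documented CloudFront behavior).
    query = urlencode(sorted(parse_qsl(parts.query)))
    return f"{parts.path}?{query}"

print(cache_key("/page?x=1&y=2", forward_query_strings=False))  # /page
print(cache_key("/page?y=2&x=1", forward_query_strings=True))   # /page?x=1&y=2
```

With forwarding disabled, `/page?x=1` and `/page?x=2` would both be served from the single cached copy of `/page`; with forwarding enabled, each distinct query string is cached (and fetched from the origin) separately.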

GZIP

You can configure Amazon CloudFront to apply GZIP compression automatically when browsers and other clients request a compressed object with text and other compressible file formats. This means that if you are already using Amazon S3, CloudFront can transparently compress this type of content. For origins outside S3, doing compression at the edge means that you don’t need to use resources at your origin to do compression. The resulting smaller size of compressed objects makes downloads faster and reduces your CloudFront data transfer charges. To use the feature, simply specify within your cache behavior settings that you would like CloudFront to compress objects automatically and ensure that your client adds Accept-Encoding: gzip in the request header (most modern web browsers do this by default).
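The edge-side decision can be sketched with the standard library's `gzip` module. The function and its return shape are hypothetical, but the gating on `Accept-Encoding: gzip` matches what CloudFront requires of the client:

```python
import gzip

# Sketch of edge-side compression: compress only when the client advertises
# gzip support via the Accept-Encoding request header.
def maybe_compress(body: bytes, accept_encoding: str):
    if "gzip" in accept_encoding.lower():
        return gzip.compress(body), "gzip"
    return body, None  # serve uncompressed; no Content-Encoding header

page = b"<html>" + b"hello world " * 500 + b"</html>"
compressed, encoding = maybe_compress(page, "gzip, deflate, br")
print(encoding, len(compressed) < len(page))  # gzip True

body, encoding = maybe_compress(page, "identity")
print(encoding)  # None
```

Repetitive text such as HTML, CSS, and JavaScript compresses well, which is where the transfer-cost and latency savings come from.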

HTTP Cookie Support

Amazon CloudFront supports delivery of dynamic content that is customized or personalized using HTTP cookies. To use this feature, you specify whether you want Amazon CloudFront to forward some or all of your cookies to your custom origin server. You may also specify wildcard characters in the cookie name to forward multiple cookies matching a string format. Amazon CloudFront then considers the forwarded cookie values when identifying a unique object in its cache. This way, your end users get both the benefit of content that is personalized just for them with a cookie and the performance benefits of Amazon CloudFront.

Forward Headers to Origin

You can use Amazon CloudFront to forward all (or a whitelist of) request headers to your origin server. These headers contain information, such as the device used by your visitors or the country from which they accessed your content. You can configure CloudFront to cache your content based on the values in the headers, so that you can deliver customized content to your viewers. For example, if you are hosting multiple websites on the same web server, you can configure Amazon CloudFront to forward the Host header to your origin. When your origin returns different versions of the same object based on the values in the Host header, Amazon CloudFront will cache the objects separately based on those values.

Add or Modify Request Headers Forwarded from Amazon CloudFront to Origin

You can configure Amazon CloudFront to add custom headers or override the value of existing request headers when CloudFront forwards requests to your origin. You can use these headers to help validate that requests made to your origin were sent from CloudFront (shared secret) and configure your origin only to allow requests that contain the custom header values that you specify. This feature also helps with setting up Cross-Origin Resource Sharing (CORS) for your CloudFront distribution: You can configure CloudFront always to add custom headers to your origin to accommodate viewers that don’t automatically include those headers in requests. It also allows you to disable varying on the origin header, which improves your cache hit ratio, yet forward the appropriate headers so that your origin can respond with the CORS header.
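The shared-secret check on the origin side can be sketched as follows. The header name and value are hypothetical — you choose your own when configuring the custom origin header on the distribution:

```python
import hmac

# Origin-side validation of a custom header added by the CDN (shared secret).
# "X-Origin-Verify" and its value are hypothetical names for this sketch.
EXPECTED_SECRET = "s3cr3t-value-set-in-cloudfront"

def is_from_cdn(request_headers):
    supplied = request_headers.get("X-Origin-Verify", "")
    # compare_digest avoids leaking the secret via timing differences
    return hmac.compare_digest(supplied, EXPECTED_SECRET)

print(is_from_cdn({"X-Origin-Verify": "s3cr3t-value-set-in-cloudfront"}))  # True
print(is_from_cdn({"User-Agent": "curl/8.0"}))                             # False
```

An origin configured this way rejects direct requests that bypass CloudFront, since only the distribution knows the secret value to inject.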

Enforce HTTPS-Only Connection Between Amazon CloudFront and Your Origin Web Server

You can configure Amazon CloudFront to connect to your origin server using HTTPS regardless of whether the viewer made the request by using HTTP or HTTPS.

Support for TLSv1.1 and TLSv1.2 Between Amazon CloudFront and Your Origin Web Server

Amazon CloudFront supports the TLSv1.1 and TLSv1.2 protocols for HTTPS connections between CloudFront and your custom origin web server (along with SSLv3 and TLSv1.0). You can choose the protocols that you want CloudFront to use when communicating with your origin so that you can, for example, choose not to allow CloudFront to communicate with your origin by using SSLv3, which is less secure than TLS.

Default Root Object

You can specify a default file (for example, index.html) that will be served for requests made for the root of your distribution without an object name specified, for instance, requests made to http://abc123.cloudfront.net/ alone, without a file name.

Object Versioning and Cache Invalidation

You have two options to update your files cached at the Amazon CloudFront edge locations. You can use object versioning to manage changes to your content. To implement object versioning, you create a unique file name in your origin server for each version of your file and use the file name corresponding to the correct version in your web pages or applications. With this technique, Amazon CloudFront caches the version of the object that you want without needing to wait for an object to expire before you can serve a newer version.

You can also remove copies of an object from all Amazon CloudFront edge locations at any time by calling the invalidation API. This feature removes the object from every Amazon CloudFront edge location regardless of the expiration period you set for that object on your origin server. If you need to remove multiple objects at once, you may send a list of invalidation paths (up to 3,000) in an XML document. Additionally, you can request up to 15 invalidation paths with a wildcard character. The invalidation feature is designed to be used in unexpected circumstances; for example, to correct an encoding error on a video you uploaded or an unanticipated update to your website’s CSS file. However, if you know beforehand that your files will change frequently, it is recommended that you use object versioning to manage updates to your files. This technique gives you more control over when your changes take effect, and it also lets you avoid potential charges for invalidating objects.

Access Logs

You can choose to receive more information about the traffic delivered or streamed by your Amazon CloudFront distribution by enabling access logs. Access logs are activity records that show you detailed information about each request made for your content. CloudFront access log files are delivered automatically multiple times per hour, and the entries in those files are typically available within an hour of the viewer requesting the object.

Amazon CloudFront Usage Charts

Amazon CloudFront Usage Charts let you track trends in data transfer and requests (both HTTP and HTTPS) for each of your active CloudFront web distributions. These charts show your usage from each CloudFront region at daily or hourly granularity, going back up to 60 days. They also include totals, average, and peak usage during the time interval selected.

Amazon CloudFront Monitoring and Alarming Using Amazon CloudWatch

You can monitor, alarm, and receive notifications on the operational performance of your Amazon CloudFront distributions using Amazon CloudWatch, giving you more visibility into the overall health of your web application. CloudFront automatically publishes six operational metrics, each at 1-minute granularity, into Amazon CloudWatch. You can then use CloudWatch to set alarms on any abnormal patterns in your CloudFront traffic. These metrics appear in CloudWatch within a few minutes of the viewer’s request for each of your Amazon CloudFront web distributions.

Zone Apex Support

You can use Amazon CloudFront to deliver content from the root domain, or "zone apex" of your website. For example, you can configure both http://www.example.com and http://example.com to point at the same CloudFront distribution, without the performance penalty or availability risk of managing a redirect service.

Using Amazon CloudFront with AWS WAF to Protect Your Web Applications

AWS WAF is a web application firewall that helps detect and block malicious web requests targeted at your web applications. AWS WAF allows you to create rules based on IP addresses, HTTP headers, and custom URIs. Using these rules, AWS WAF can block, allow, or monitor (count) web requests for your web application.

HTTP Streaming of On-Demand Media

Amazon CloudFront can be used to deliver your on-demand adaptive bit-rate media content at scale to a global audience. Whether you want to stream your content using Microsoft Smooth Streaming format to Microsoft Silverlight players or stream to iOS devices using HTTP Live Streaming (HLS) format, you can do so using Amazon CloudFront without the need to set up and manage any third-party media servers. Furthermore, there are no additional charges for using this capability beyond Amazon CloudFront’s standard data transfer and request fees. Simply encode your media files for the format you want to use and upload them to the origin you plan to use.

On-demand Smooth Streaming You can specify in the cache behavior of an Amazon CloudFront web distribution to support Microsoft Smooth Streaming format for that origin.

On-demand HLS Streaming Streaming on-demand content using the HLS format is supported in Amazon CloudFront without having to do any additional configurations. You store your content in your origin (for example, Amazon S3). Amazon CloudFront delivers this content at a global scale to a player (such as the iOS player) requesting the HLS segments for playback.

RTMP Distributions for On-Demand Media Delivery

Amazon CloudFront lets you create RTMP distributions, which deliver content to end users in real time—the end users watch the bytes as they are delivered. Amazon CloudFront uses Adobe’s Flash Media Server 3.5 to power its RTMP distributions. RTMP distributions use the Real-Time Messaging Protocol (RTMP) and several of its variants, instead of the HTTP or HTTPS protocols used by other Amazon CloudFront distributions.

Content Delivery

Now that you have configured Amazon CloudFront to deliver your content, the following will happen when users request your objects:

  1. A user accesses your website or application and requests one or more objects.
  2. DNS routes the request to the Amazon CloudFront edge location that can best serve the user’s request—typically the nearest Amazon CloudFront edge location in terms of latency.
  3. In the edge location, Amazon CloudFront checks its cache for the requested files. If the files are in the cache, Amazon CloudFront returns them to the user. If the files are not in the cache, CloudFront does the following:
    1. Amazon CloudFront compares the request with the specifications in your distribution and forwards the request for the files to the applicable origin server for the corresponding file type.
    2. The origin servers send the files back to the Amazon CloudFront edge location.
    3. As soon as the first byte arrives from the origin, Amazon CloudFront begins to forward the files to the user.
    4. Amazon CloudFront also adds the files to the cache in the edge location for the next time someone requests those files.

As mentioned earlier, you can configure headers to indicate how long you want the files to stay in the cache in Amazon CloudFront edge locations. By default, each object stays in an edge location for 24 hours before it expires. The minimum expiration time is 0 seconds; there isn’t a maximum expiration time limit.
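The request flow above can be modeled with a toy edge cache. The class, its 24-hour default TTL, and the fake origin callback are all illustrative:

```python
import time

# Toy edge cache illustrating the request flow: serve from cache when the
# object is present and unexpired, otherwise fetch from the origin and store.
class EdgeCache:
    def __init__(self, fetch_from_origin, default_ttl=86400):  # 24 hours
        self.fetch = fetch_from_origin
        self.ttl = default_ttl
        self.store = {}          # path -> (body, expires_at)
        self.origin_fetches = 0

    def get(self, path, now=None):
        now = time.time() if now is None else now
        entry = self.store.get(path)
        if entry and entry[1] > now:          # cache hit, not yet expired
            return entry[0]
        body = self.fetch(path)               # cache miss: go to the origin
        self.origin_fetches += 1
        self.store[path] = (body, now + self.ttl)
        return body

edge = EdgeCache(lambda path: f"origin content for {path}")
edge.get("/logo.png", now=0)
edge.get("/logo.png", now=10)      # served from the edge cache
print(edge.origin_fetches)         # 1
edge.get("/logo.png", now=90000)   # past the 24-hour default: fetched again
print(edge.origin_fetches)         # 2
```

Real edge locations also evict unpopular objects under memory pressure and stream the first bytes to the viewer while the origin fetch is still in flight, which this sketch omits.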

The Amazon CloudFront console includes a variety of reports:

Amazon CloudFront cache statistics reports These reports use the Amazon CloudFront console to display a graphical representation of statistics related to CloudFront edge locations. Data for these statistics are drawn from the same source as CloudFront access logs. You can display charts for a specified date range in the last 60 days, with data points every hour or every day.

Amazon CloudFront popular objects reports These reports are available via the Amazon CloudFront console. You can display a list of the 50 most popular objects for a distribution during a specified date range in the previous 60 days.

Amazon CloudFront top referrers reports These reports provide a list of the 25 domains of the websites that originated the most HTTP and HTTPS requests for objects that CloudFront is distributing for a specified distribution. These top referrers can be search engines, other websites that link directly to your objects, or your own website.

Amazon CloudFront usage reports These reports are more detailed than the billing report but less detailed than CloudFront access logs. The usage report provides aggregate usage data by hour, day, or month, and it lists operations by region and usage type.

Amazon CloudFront viewers reports These reports show the devices, browsers, and operating system versions used to access your content, as well as the locations from which it is being accessed.

Amazon CloudFront Management

You can configure Amazon CloudFront using the AWS Management Console, the Amazon CloudFront console, the AWS CLI, or various SDKs available for Amazon CloudFront.

Amazon CloudFront metrics are also accessible in the Amazon CloudWatch Console.

Amazon CloudFront Security

You can control user access to your private content in two ways:

  1. Restrict access to objects in the Amazon CloudFront edge cache using either signed URLs or signed cookies.
  2. Restrict access to objects in your Amazon S3 bucket so that users can access them only through Amazon CloudFront, thus preventing direct access to the Amazon S3 bucket. See Amazon S3 bucket policies in Chapter 6, “Storage Systems,” for more details.

Summary

We covered a lot of material in this chapter. While the exam questions may not go into as great a depth in terms of detail, understanding this material will assist you in providing the best answers to the questions on the exam.

In this chapter, we discussed the following:

  • Amazon VPC as a logically-isolated section of the AWS Cloud
  • AWS Direct Connect and how it allows you to establish a dedicated network connection from your premises to AWS
  • The two types of Elastic Load Balancers and their features (Application and Classic Load Balancers)
  • How Elastic Load Balancers automatically distribute incoming application traffic across multiple Amazon Elastic Compute Cloud (Amazon EC2) instances within an AWS Region
  • Virtual Private Network (VPN) connections and how you can connect Amazon VPC to remote networks to make your AWS infrastructure highly available
  • Internet gateways, which are used to connect your public subnets to the Internet
  • NAT gateways, which are used to provide connectivity to the Internet from your private instances
  • Elastic Network Interfaces (ENIs), which are used to multihome an Amazon EC2 instance and can be reassigned to another Amazon EC2 instance
  • Elastic IP addresses (EIP), which are public IPv4 addresses that can be assigned and reassigned to Amazon EC2 instances. IPv6 is not (currently) supported with EIP.
  • You learned that Amazon Route 53 is a highly available and scalable cloud Domain Name System (DNS) web service. We discussed the various routing types: Simple, Weighted Round Robin (WRR), Latency Based Routing (LBR), geolocation, and Failover routing.
  • You learned that Amazon CloudFront is a global Content Delivery Network (CDN) service that accelerates delivery of your websites, Application Programming Interfaces (APIs), video content, or other web assets.

Resources to Review

 

Exam Essentials

Understand what a VPC is. Know how to set up a VPC, and know the minimum and maximum sizes of both a VPC and its subnets.

Understand the purpose and use of route tables, network ACLs, and security groups. Know how to use each for controlling access and providing security.

Know the default values for route tables, network ACLs, and security groups. Know where those default values come from, how to modify them, and why you would modify them.

Understand the difference between a private and public subnet. Public subnets allow traffic to the Internet; private subnets do not. Know how to enable Amazon EC2 instances in private subnets to access the Internet.

Understand the role and function of the various ways to connect the VPC with outside resources. This includes Internet gateway, VPN gateway, Amazon S3 endpoint, VPC peering, NAT instances, and NAT gateways. Understand how to configure these services.

Understand what an Elastic IP (EIP) is. EIPs support public IPv4 addresses only (as of this publication). Understand the difference between an EIP and an ENI.

Understand what an Elastic Network Interface (ENI) is. Elastic Network Interfaces can be assigned and reassigned to an Amazon EC2 instance. Understand why this is important.

Know what services operate within a VPC and what services operate outside a VPC. Amazon EC2 lives within a VPC. Services such as Amazon S3 live outside the VPC. Know the various ways to access these services.

Know what AWS Direct Connect is. Understand why it is used and the basic steps for setting it up. (Remember the seven steps listed in the AWS Direct Connect section of this chapter.)

Understand the concept of VIFs. Understand what a VIF is and the difference between a public and private VIF. Why would you use one versus the other? Understanding these concepts will be very helpful on the exam.

Understand the options for Elastic Load Balancing (Classic Load Balancer vs. Application Load Balancer). Know how each type of load balancer operates, why you would choose one over the other, and how to configure each.

Understand how health checks work in each type of load balancer. Classic Load Balancers and Application Load Balancers have different health check options. Know what they are!

Understand how listeners work. Understand rules, priorities, and conditions and how they interact.

Know how Amazon CloudWatch, AWS CloudTrail, and access logs work. Know what type of information each one provides.

Understand the role of security groups with load balancers. Be able to configure a security group and know how rules are applied.

Understand the various options for establishing an IPsec VPN tunnel from an Amazon VPC to a customer location. Know the operational and security implications of these options.

Know how Amazon Route 53 works as a DNS provider. Understand how it can be used for both public and private hosted zones.

Know what the different routing options are for Amazon Route 53. Understand how to configure the various routing options and how they work.

Know what an Amazon Route 53 routing policy is. Understand how it is applied in Amazon Route 53.

Understand what record types Amazon Route 53 supports and how they work. Know both standard and non-standard record sets.

Know the tools for managing and monitoring Amazon Route 53. Understand how Amazon CloudWatch and AWS CloudTrail work with Amazon Route 53. Go deep in understanding how all of the services in this chapter are monitored.

Know the purpose of Amazon CloudFront and how it works. Know what a distribution is and what an origin is. Know what types of files Amazon CloudFront can handle.

Know the steps to implement Amazon CloudFront. Remember there are three steps to do this.

Know the various methods for securing content in Amazon CloudFront. Know how to secure your content at the edge and at the origin.
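For instance, a CloudFront signed URL is built around a policy document naming the resource and an expiry time. The sketch below constructs only the canned policy JSON; producing an actual signed URL additionally requires signing this document with the private key of a CloudFront key pair. The distribution domain in the usage note is a placeholder.

```python
import json


def canned_policy(url, expires_epoch):
    """Build the CloudFront canned policy document used for signed URLs:
    one statement restricting access to `url` until `expires_epoch`.
    This only builds the policy; it does not sign anything."""
    return json.dumps({
        "Statement": [{
            "Resource": url,
            "Condition": {"DateLessThan": {"AWS:EpochTime": expires_epoch}},
        }]
    }, separators=(",", ":"))  # compact JSON, as used when signing
```

For example, `canned_policy("https://d111111abcdef8.cloudfront.net/image.jpg", 1735689600)` yields a policy that denies the URL after the given Unix timestamp once it is signed and attached as query parameters.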


Exercises

By now you should have set up an AWS account. If you haven't, now is the time to do so. Note that these exercises run in your AWS account and may therefore incur charges.

Use the Free Tier when launching resources. The AWS Free Tier applies to participating services across the following AWS Regions: US East (Northern Virginia), US West (Oregon), US West (Northern California), Canada (Central), EU (London), EU (Ireland), EU (Frankfurt), Asia Pacific (Singapore), Asia Pacific (Tokyo), Asia Pacific (Sydney), and South America (São Paulo). For more information, see https://aws.amazon.com/s/dm/optimization/server-side-test/free-tier/free_np/.

Remember to delete and terminate resources to minimize usage charges.








Review Questions

  1. You have implemented a Classic Load Balancer. You now need to collect some information: specifically, the IP address of the client making requests to the Classic Load Balancer. How would you collect this information?

    1. Enable CloudWatch and monitor the HostConnect metric.
    2. Use AWS CloudTrail and monitor the eventSource API.
    3. You would not be able to collect this information in a Classic Load Balancer—it is only available with Application Load Balancers.
    4. Use access logs.
  2. You just updated the port of the health check for an Application Load Balancer. You go into the CloudWatch console, but you do not see the health check results. What could possibly be the reason?

    1. The CloudWatch console does not display information on the number of healthy hosts.
    2. You find this information in the Amazon EC2 Management console.
    3. You should review your security group’s rules to make sure that traffic is allowed to that port.
    4. You need to restart your Application Load Balancer after you make this change.
  3. You need to establish a highly available connection between your Amazon VPC and your datacenter. What is the best way to accomplish this?

    1. Create an AWS Direct Connect connection between your datacenter and your AWS VPC.
    2. Spin up multiple Amazon EC2 instances across two Availability Zones. Load VPN software onto the Amazon EC2 instances. Set internal routing such that if one Amazon EC2 instance fails the other takes over.
    3. Set up a Virtual Private Gateway with a route out to your datacenter.
    4. Set up a Virtual Private Gateway. Make sure that you have two customer gateways configured.
  4. You are using Amazon Route 53 as your DNS provider. You have a web application that is running in your datacenter located in Las Vegas, NV, and in the AWS Frankfurt, Germany Region. What steps would you take to minimize the load times for this web application?

    1. Implement a geolocation routing policy, where all requests from users in the United States are routed to your Las Vegas location, and everything in Europe is routed to your AWS service in the Frankfurt Region.
    2. Set up your web application in an AWS Region in the United States because you are not able to route traffic to non-AWS locations.
    3. Set up a simple routing policy that routes all European traffic to Frankfurt and all United States traffic to Las Vegas.
    4. Set up a weighted routing policy, and split the traffic evenly between Frankfurt and Las Vegas.
    5. Set up latency-based routing on your service.
  5. You are trying to SSH in to an Amazon EC2 instance in your Amazon VPC but are unable to do so. What should you be checking?

    1. Make sure that you have a Virtual Private Gateway attached to your VPC, that the VPC route table has an entry that routes packets to the Internet, and that the Network ACL has an inbound rule that allows traffic on port 80.
    2. Make sure that you have an Internet gateway attached to your VPC, that the VPC route table has an entry that routes packets to the Internet, and that the Network ACL has an inbound rule that allows SSH.
    3. Make sure that you have an Internet gateway attached to your Amazon VPC, that the VPC route table has an entry that routes packets to the Internet, that the Network ACL has an inbound and an outbound rule that allows SSH, and that the Amazon EC2 instance has a security group rule that allows inbound SSH.
    4. Make sure that you have an Internet gateway attached to your Amazon VPC, that the VPC route table has an entry that routes packets to the Internet, that the Network ACL has an inbound and an outbound rule that allows SSH, and that the Amazon EC2 instance has a security group rule that allows inbound SSH. Make sure that the EC2 instance has a Public or Elastic IP address associated with it.
  6. Why would you place an Amazon EC2 instance in a private subnet?

    1. To decrease the latency in reaching the instance
    2. Because you have more available IP addresses in a private subnet than you do in a public subnet
    3. As a way of providing an additional layer of security
    4. Because with some Amazon EC2 instances, you are obligated to place them in a private subnet
  7. You need to order an AWS Direct Connect circuit. What do you need on your side to implement AWS Direct Connect successfully?

    1. A router that supports BGP, with single mode fiber, and with a 1 or 10 Gig Ethernet port. You also need both a public and a private Autonomous System Number.
    2. A router that supports OSPF and has a 10 Gig Ethernet port. You also need a private Autonomous System Number.
    3. A switch that supports single mode fiber with a 1 or 10 Gig Ethernet port
    4. A router that supports BGP with single mode fiber and with a 1 or 10 Gig Ethernet port. You also need both a public and private Autonomous System Number. Finally, you need the ability to issue a LOA/CFA to AWS.
    5. A router that supports static routing with single mode fiber and that has a 1 or 10 Gig Ethernet port
  8. What would NOT be a reason to get AWS Direct Connect?

    1. Increased latency
    2. Decreased data transfer out costs
    3. Connecting VPCs in different regions
    4. Connectivity between your WAN and AWS
  9. What is a private VIF?

    1. The physical connection between AWS and the customer location
    2. The logical interface between the customer location and those AWS resources located inside the VPC
    3. The logical interface between the customer location and those AWS services located outside the VPC
    4. The logical connection between two VPCs when you establish VPC peering
  10. The network ACL shown here has been applied to a subnet as its inbound rule set. What statement is correct?

    Rule #  Type         Protocol  Port Range  Source     Allow/Deny
    50      SMTP (25)    TCP (6)   25          0.0.0.0/0  DENY
    100     ALL Traffic  ALL       ALL         0.0.0.0/0  ALLOW
    *       ALL Traffic  ALL       ALL         0.0.0.0/0  DENY
    1. All traffic is blocked in both directions.
    2. All traffic is allowed, except for SMTP, which is blocked inbound and outbound.
    3. All traffic is allowed inbound, except for any response to an SMTP packet.
    4. All traffic is allowed inbound, except SMTP, which is blocked within the subnet.
    5. All traffic is allowed inbound, except SMTP, which is blocked from entering the subnet.
  11. What statement is true about Internet gateways?

    1. For high availability, you should have one Internet gateway per Availability Zone.
    2. Internet gateways come with public IP addresses already assigned.
    3. You cannot have a VPC with both an Internet gateway and a Virtual Private Network gateway.
    4. An Internet gateway is needed if you want to connect to AWS services outside of the VPC.
  12. You have noticed that your web servers have come under a phishing attack. You have identified the IP address that is the source of this attack. What should you do to mitigate this attack?

    1. Configure a route table that directs packets from this IP address to a fictitious Amazon EC2 instance.
    2. Configure the Network ACLs to block traffic from this IP address.
    3. Configure the security group for your web servers to deny any protocols from this IP address.
    4. Contact the AWS Help Desk, and ask them to put a block on the offending subnet.
  13. You have established three VPCs all within the same region but in different accounts. What is the easiest way to establish connectivity between all three VPCs?

    1. Designate one VPC as the master, and establish VPN peering between the master VPC and each of the other VPCs.
    2. Establish an AWS Direct Connect connection among all of the VPCs.
    3. Establish a CloudHub with all three VPCs as participants.
    4. Establish VPC peering between each pair of VPCs.
    5. Install a Virtual Private Gateway (VPG) in each VPC, and establish an IPsec tunnel between each VPC using the AWS infrastructure.
  14. What is the difference between an Internet-facing load balancer and an internal-facing load balancer? (Choose two.)

    1. There is no difference between the two.
    2. Internet-facing load balancers are larger than internal load balancers.
    3. By default, Internet-facing load balancers get their DNS names from DHCP servers, while internal load balancers do not.
    4. The DNS name of an Internet-facing load balancer is publicly resolvable to the public IP addresses of the nodes.
    5. The DNS name of an internal load balancer is publicly resolvable to the private IP addresses of the nodes.
  15. What does a default VPC come with?

    1. A /20 address space
    2. Both an Internet gateway and a Virtual Private Network gateway
    3. A route table that sends all IPv4 traffic destined for the Internet to the Internet gateway
    4. A NAT instance
  16. You need to monitor all traffic from the Internet to Amazon EC2 instances in a VPC. What AWS tool do you have at your disposal?

    1. Amazon VPC Flow Logs
    2. Amazon CloudWatch
    3. AWS CloudTrail
    4. AWS Network Management Console
  17. Which statement is correct regarding Amazon CloudFront?

    1. Amazon CloudFront will forward a file to the user as soon as it gets the first bytes.
    2. Amazon CloudFront will wait until the entire file downloads in order to perform error checking before it forwards the file to the user.
    3. Amazon CloudFront always delivers the most current version of the file to the user.
    4. Amazon CloudFront is only located in AWS Regions.
  18. What are the best ways to control access to your content in the Amazon CloudFront edge locations? (Choose two.)

    1. Origin Access Identity (OAI)
    2. Signed URLs
    3. Signed cookies
    4. Policies that restrict access by IP address
  19. You have created a VPC with an Internet gateway attached. The VPC is 10.155.0.0/16. You have created a subnet in that VPC; the subnet is 10.155.1.0/23. What is the route table for the subnet?

    1. Destination    Target  Status  Propagated
       10.0.0.0/16    local   Active  No
       10.155.0.0/23  local   Active  No
    2. Destination    Target  Status  Propagated
       10.155.0.0/16  local   Active  No
    3. Destination    Target  Status  Propagated
       10.0.0.0/16    local   Active  No
    4. There is no route table. When you create a subnet, you have to define a route table.
  20. When you configure Amazon CloudFront, an origin refers to which of the following?

    1. The AWS server that is holding your static content.
    2. Either an HTTP server or an Amazon S3 bucket.
    3. For static content only, it is an Amazon S3 bucket.
    4. For static content, it is either an HTTP server or an Amazon S3 bucket. For media files on demand it is an S3 bucket.