THE AWS CERTIFIED ADVANCED NETWORKING – SPECIALTY EXAM OBJECTIVES COVERED IN THIS CHAPTER MAY INCLUDE, BUT ARE NOT LIMITED TO, THE FOLLOWING:
As you have seen throughout this guide, AWS provides many network services and features to help you build highly available, robust, scalable, and secure networks in the cloud. This chapter covers scenarios and reference architectures for combining many of these network components to meet common customer requirements. These scenarios include implementing network patterns that create hybrid networks and span multiple regions and locations. The exercises at the end of this chapter will help you design appropriate network architectures on AWS. Understanding how to architect networks to meet customer requirements is required to pass the exam, and we highly recommend that you complete the exercises in this chapter.
Imagine that you work for a company that is looking to expand a flagship application from a company data center onto AWS. The application has been successfully serving your customers in Europe, and you have been asked to extend application functionality quickly into the eu-central-1 region. Your application’s current design is depicted in Figure 16.1.
As you can see, the application implements a traditional “N-tier” architecture with web, application, and database tiers. All user data is stored in a relational database. Your initial task is to scale the web and application tiers to support increased demand for web and application server resources. As a result, you propose the network architecture depicted in Figure 16.2.
This design adds Amazon Route 53 to provide Domain Name System (DNS)-based routing between AWS and your existing on-premises resources. The AWS environment hosts the web and application tiers behind Elastic Load Balancing. Lastly, the design provides back-end connectivity for the application to access data from its on-premises relational database.
For this network design, the use of Amazon Route 53 Weighted Round Robin (WRR) routing and health checks is recommended so that traffic can be dialed up and down based on what percentage of traffic you would like to send to AWS versus your on-premises resources. The use of other Amazon Route 53 routing options (for example, latency-based routing) is not recommended because they do not provide as much control over how much traffic is sent to AWS versus your on-premises resources. This lack of control could lead to several undesirable scenarios, such as the following:
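The WRR configuration described above can be sketched as the record sets you might submit to Route 53's ChangeResourceRecordSets API. The domain, IP addresses, and health check IDs below are hypothetical, and the batch is shown as a plain dictionary rather than a live API call:

```python
# Illustrative weighted record sets splitting traffic between AWS and
# on-premises endpoints. Weights are relative: with 20 and 80, AWS
# receives 20 / (20 + 80) = 20% of DNS responses. Dialing traffic up
# or down means resubmitting the batch with new weights.
def weighted_change_batch(aws_weight, onprem_weight):
    """Build a Route 53 change batch for Weighted Round Robin routing."""
    def record(set_id, weight, value, health_check_id):
        return {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com.",        # hypothetical domain
                "Type": "A",
                "SetIdentifier": set_id,            # distinguishes records sharing a name
                "Weight": weight,                   # relative traffic share
                "TTL": 60,                          # short TTL so weight changes take effect quickly
                "ResourceRecords": [{"Value": value}],
                "HealthCheckId": health_check_id,   # unhealthy endpoints are not returned
            },
        }
    return {
        "Changes": [
            record("aws", aws_weight, "203.0.113.10", "hc-aws"),
            record("onprem", onprem_weight, "198.51.100.10", "hc-onprem"),
        ]
    }

batch = weighted_change_batch(20, 80)
```

Attaching a health check to each weighted record means that if the AWS endpoint fails, Route 53 stops returning it regardless of its weight, and all traffic flows to the on-premises endpoint.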
The design shown in Figure 16.2 retains application data on-premises, and therefore it requires careful back-end connectivity consideration. Many customers start with a Virtual Private Network (VPN) connection because VPN connections can often be set up more quickly than AWS Direct Connect connections. VPN connections can be useful for experimenting with cloud bursting, as a bridge while establishing AWS Direct Connect connections, or when back-end connectivity bandwidth is relatively low and the application can tolerate Internet-influenced variable latency and jitter. AWS Direct Connect should be leveraged for high-bandwidth needs, such as when multiple 10 Gbps connections are required, or to provide consistent network latency with minimal jitter to your applications.
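The trade-off above can be condensed into a simple decision rule. This is an illustrative sketch only; the 1 Gbps threshold is an assumption for the example, not an AWS recommendation:

```python
def recommend_connectivity(bandwidth_gbps, needs_consistent_latency):
    """Illustrative rule of thumb for choosing back-end connectivity.

    VPN: quick to establish, rides the Internet, variable latency/jitter.
    AWS Direct Connect: dedicated circuit, consistent latency, suited to
    high bandwidth (multiple 10 Gbps connections can be provisioned).
    """
    if needs_consistent_latency or bandwidth_gbps >= 1:   # assumed threshold
        return "AWS Direct Connect"
    # A VPN suits cloud bursting experiments or serves as a bridge
    # while a Direct Connect circuit is being provisioned.
    return "VPN"
```

In practice many customers run both: a Direct Connect connection for steady-state traffic with a VPN as a failover path.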
This design could also be augmented in a number of different ways, depending on application requirements, including:
For this next scenario, consider a company that is looking to implement multi-location resiliency for a flagship application. The application must be able to scale up and down gracefully based on user demand, and it must be capable of surviving the failure of multiple data centers, including the loss of an entire region. In the event of a multi-region disaster, the company still wants to be able to serve a static version of the website to users. To accomplish this goal, we will break down the requirements by regional, multi-regional, and disaster recovery components.
Figure 16.3 depicts a highly available regional design. Users are directed by Amazon Route 53 to an Application Load Balancer configured with web application firewall rules, cross-zone load balancing, connection draining, and instance health checks. This load balancer is responsible for applying security rules to user traffic while also distributing valid request load evenly across all healthy instances in multiple Availability Zones. It also integrates with a Multi-AZ Auto Scaling group to ensure that in-flight requests are handled gracefully before an Amazon Elastic Compute Cloud (Amazon EC2) instance is removed from the load balancer. This combination protects the application from Availability Zone outages, ensures that a minimal number of Amazon EC2 instances are running, and can respond to load changes by scaling each group’s Amazon EC2 instances up or down as needed.
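The load balancer behavior just described maps to a handful of concrete settings. The sketch below uses parameter names from the Elastic Load Balancing (ELBv2) API with illustrative values; the /healthz path is an assumed application endpoint:

```python
# Instance health checks: the load balancer only routes to targets that
# pass consecutive checks against the application's health endpoint.
target_group_health_check = {
    "HealthCheckProtocol": "HTTP",
    "HealthCheckPath": "/healthz",       # assumed health endpoint
    "HealthCheckIntervalSeconds": 30,
    "HealthyThresholdCount": 3,          # consecutive successes to mark healthy
    "UnhealthyThresholdCount": 2,        # consecutive failures to mark unhealthy
}

# Connection draining (called "deregistration delay" in the ELBv2 API):
# in-flight requests get up to this long to complete before an instance
# is removed from the load balancer, e.g., during an Auto Scaling scale-in.
target_group_attributes = [
    {"Key": "deregistration_delay.timeout_seconds", "Value": "300"},
]
# Note: cross-zone load balancing, which spreads requests evenly across
# healthy instances in all Availability Zones, is enabled by default on
# Application Load Balancers.
```

Auto Scaling uses the same target group health status, so an instance failing its health checks is both taken out of rotation and replaced.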
Lastly, the Amazon EC2 instances are configured to connect to a Multi-AZ Amazon RDS database. Amazon RDS creates a primary database instance and synchronously replicates all data to a standby instance in another Availability Zone. Amazon RDS monitors the health of the primary instance and will automatically fail over to the standby in the event of a failure.
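From a configuration standpoint, Multi-AZ is a single flag. The sketch below uses parameter names from the RDS CreateDBInstance API; the identifier, engine, and sizes are illustrative:

```python
# Illustrative RDS parameters. MultiAZ=True provisions a synchronous
# standby in a second Availability Zone; on failure, RDS repoints the
# instance's DNS endpoint to the standby, so the application needs no
# connection-string change.
create_db_instance_params = {
    "DBInstanceIdentifier": "flagship-db",   # hypothetical name
    "Engine": "mysql",                        # assumed engine
    "DBInstanceClass": "db.m5.large",
    "AllocatedStorage": 100,                  # GiB
    "MultiAZ": True,                          # synchronous standby + automatic failover
}
```

Because failover is DNS-based, clients should honor the endpoint's TTL rather than caching resolved IP addresses indefinitely.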
Figure 16.4 expands this application’s network architecture to another region. In this example, the first region’s network infrastructure is replicated into a second region, including the application’s Virtual Private Cloud (VPC), subnets, Application Load Balancer and web application firewall rules, Amazon EC2 instances, and Auto Scaling configuration. Additionally, the Amazon Route 53-managed alias record for this domain is updated to include both load balancers with a health check and failover routing policy to reroute traffic from the primary region to the secondary region in the event of a regional failure. The Amazon RDS configuration is also updated to create an asynchronous read replica of the application’s database in the new region. In the event of a regional failure, the Amazon RDS read replica could be promoted to become the primary database instance.
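The failover routing policy described above can be sketched as a pair of alias record sets. Domain and DNS names are hypothetical; EvaluateTargetHealth lets Route 53 use the load balancer's own health status:

```python
# Illustrative Route 53 failover alias records: the secondary region's
# load balancer receives traffic only while the primary is unhealthy.
def failover_alias(set_id, role, alb_dns, alb_zone_id):
    return {
        "Name": "prod.example.com.",          # hypothetical production domain
        "Type": "A",
        "SetIdentifier": set_id,
        "Failover": role,                     # "PRIMARY" or "SECONDARY"
        "AliasTarget": {
            "DNSName": alb_dns,
            "HostedZoneId": alb_zone_id,      # the load balancer's zone ID, not your own
            "EvaluateTargetHealth": True,     # inherit the target's health status
        },
    }

primary = failover_alias("region-1", "PRIMARY", "alb-eu.elb.example.aws.", "Z1EXAMPLE")
secondary = failover_alias("region-2", "SECONDARY", "alb-us.elb.example.aws.", "Z2EXAMPLE")
```

Note that promoting the cross-region read replica to primary is a separate, deliberate step (an RDS promote operation); DNS failover alone redirects web traffic, not database writes.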
A variation of this design could include adding Amazon CloudFront and AWS WAF to centrally manage web application firewall rules for the application. Another variation could include creating an Amazon Route 53 latency-based routing policy instead of a failover policy. This approach would create an active-active environment that routes requests to the closest healthy load balancer based on minimizing network latency. This scenario requires tight coordination with the application team to ensure that additional database network connectivity requirements are met. Approaches for managing database connectivity include the following:
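The latency-based variation replaces the Failover attribute with a Region attribute on each record set. As before, names and zone IDs are hypothetical:

```python
# Illustrative latency-based records: Route 53 answers each query with
# the healthy endpoint whose region has the lowest measured latency to
# the resolver, producing an active-active environment.
def latency_alias(region, alb_dns, alb_zone_id):
    return {
        "Name": "prod.example.com.",          # hypothetical domain
        "Type": "A",
        "SetIdentifier": region,
        "Region": region,                     # region used for latency measurement
        "AliasTarget": {
            "DNSName": alb_dns,
            "HostedZoneId": alb_zone_id,
            "EvaluateTargetHealth": True,     # unhealthy regions are skipped
        },
    }

records = [
    latency_alias("eu-central-1", "alb-eu.elb.example.aws.", "Z1EXAMPLE"),
    latency_alias("us-east-1", "alb-us.elb.example.aws.", "Z2EXAMPLE"),
]
```

Because both regions now accept writes-capable traffic, the database tier must handle concurrent access from both regions, which is why the coordination noted above matters.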
Figure 16.5 expands this architecture to include a final multi-region disaster recovery failover environment. In this example, two additional Amazon Route 53 aliases are created for the application. Users are directed to the application’s user-friendly domain name (such as www.domain.com), which is configured by Amazon Route 53 with a failover alias record pointing to the application’s production domain name (for example, prod.domain.com) as primary and the application’s static application domain name (such as static-app.domain.com) for failover.
The production domain name maintains the previous configuration, which includes records pointing to each regional Application Load Balancer and health checks. The static domain name is configured with a CNAME record pointing to an Amazon CloudFront distribution with an Amazon Simple Storage Service (Amazon S3) bucket origin hosting a static version of the application. In this scenario, the application’s user-friendly domain name will direct traffic to the application’s production load balancers as long as at least one of them is healthy. In the event that all resources across multiple Availability Zones and regions are unhealthy, Amazon Route 53 will direct users to an Amazon CloudFront distribution and Amazon S3 bucket in yet another region.
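The layered failover can be sketched as record sets at two levels: www fails over from the production name to the static name, and the static name in turn points at CloudFront. All domain names and the distribution name are hypothetical:

```python
# Illustrative layered failover. www.example.com serves the production
# stack while any regional load balancer is healthy, and falls back to
# the static CloudFront-backed site otherwise.
www_records = [
    {
        "Name": "www.example.com.",
        "Type": "A",
        "SetIdentifier": "production",
        "Failover": "PRIMARY",
        "AliasTarget": {
            "DNSName": "prod.example.com.",     # regional ALB records live here
            "HostedZoneId": "ZEXAMPLE",
            "EvaluateTargetHealth": True,        # healthy if any regional ALB is healthy
        },
    },
    {
        "Name": "www.example.com.",
        "Type": "A",
        "SetIdentifier": "static-fallback",
        "Failover": "SECONDARY",
        "AliasTarget": {
            "DNSName": "static-app.example.com.",
            "HostedZoneId": "ZEXAMPLE",
            "EvaluateTargetHealth": False,       # last resort; always considered available
        },
    },
]

# The static name resolves to the CloudFront distribution via CNAME.
static_record = {
    "Name": "static-app.example.com.",
    "Type": "CNAME",
    "TTL": 300,
    "ResourceRecords": [{"Value": "d111111abcdef8.cloudfront.net"}],  # hypothetical distribution
}
```

Keeping EvaluateTargetHealth off the secondary ensures the static site is always eligible as an answer of last resort.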
Additionally, Amazon CloudFront could be used to serve both static and dynamic content to your customers. Using Amazon CloudFront allows your content to be delivered to users from edge locations distributed across the world, can reduce the load on your back-end resources, and provides many additional benefits. More details are available in Chapter 7, “Amazon CloudFront.”
In this chapter, you learned about some additional scenarios where multiple AWS network services and features can be combined to build highly available, robust, scalable, and secure networks in the cloud to meet common customer requirements. These scenarios included creating hybrid networks to support application scaling to AWS and implementing highly robust applications that span multiple regions and locations.
For further learning, review the following URLs:
Understand the different types of Amazon Route 53 routing and know when you would use each one. Amazon Route 53 provides a number of different routing policies. These routing policies affect how network traffic is sent to your applications. Make sure that you understand the implications of each option so that you are able to map the most appropriate routing feature to different application requirements. Review Chapter 6, “Domain Name System and Load Balancing” for more information about Amazon Route 53 features.
Understand the different types of on-premises network connectivity requirements and know when you would use each one. AWS provides both VPN and AWS Direct Connect for connecting on-premises networks with AWS. Make sure that you are familiar with the implications of each option and can apply the appropriate solution to meet application connectivity requirements. Review Chapter 4 and Chapter 5, “AWS Direct Connect,” for details about each of these options.
Understand the health check capabilities for services such as Amazon Route 53 and Elastic Load Balancing. AWS provides many features for monitoring the health of your application. Make sure that you are familiar with not only these features, but also how they can be used together to provide end-to-end application health monitoring and dynamic routing around failed application components. Review Chapter 6 for more information about Amazon Route 53 features.
You should have performed the exercises in previous chapters for all of the services covered in this chapter. Take the time to go back and review previous chapters and their associated exercises to make sure that you are familiar with the implications of using each individual service or feature. The following exercises are designed to help you think about additional scenarios and determine how you would architect network connectivity solutions.
Which Amazon Route 53 routing policy would be the most appropriate for gradually migrating an application to AWS?
When connecting an on-premises network to AWS, which option reuses existing network equipment and Internet connections?
Which Amazon Route 53 routing policy would be the most appropriate for directing users to application resources that offer payment in their local currency?
Your current web application’s network security architecture includes an Application Load Balancer, locked down Security Groups, and restrictive VPC route tables. You have been asked to implement additional controls for temporarily blocking hundreds of noncontiguous, malicious IP addresses. Which AWS service or features should you add to this architecture?
A previous network administrator implemented a transit VPC architecture using Amazon EC2 instances with 10 Gbps networking to facilitate communication between multiple AWS VPCs in various regions and on-premises resources. Over time, the transit VPC Amazon EC2 instance network bandwidth has become saturated with on-premises traffic, causing application requests to fail. What design recommendations can you make to reduce application failures?
A previous network administrator implemented a transit VPC architecture to facilitate communication between multiple AWS networks and on-premises resources. Over time, the transit VPC Amazon EC2 instance network bandwidth has become saturated with cross-region traffic. What highly available design change should you recommend for this network?
You support an application that is hosted in ap-northeast-1 and eu-central-1. Users from around the world sometimes complain about long page-load times. Which Amazon Route 53 routing policy would provide the best user experience?
When connecting an on-premises network to AWS APIs, which option provides the least amount of network jitter and latency?
Which combination of Amazon Route 53 policies provides location-specific services with redundant, backup connections? (Choose two.)
What is a scalable way to provide Amazon EC2 instances in a private subnet with IPv4 egress access to the Internet with no need for network administration?
Your users have started to complain about poor application performance. You determine that your on-premises VPN connection is saturated with authentication and authorization traffic to the on-premises Microsoft Active Directory (AD) environment. Which option will reduce on-premises network traffic?