Chapter 1

Overview of AWS

This chapter covers the following subjects:

Overview of Cloud Computing: This section defines cloud computing and explains why the cloud transformation is so important for anyone developing and deploying modern web-scale applications.

The Shared Responsibility Model: The cloud is an environment shared by the provider and the cloud consumers. You will learn about the shared responsibility model, how it affects your application development, and what steps you need to take to secure your applications.

AWS Services: The core part of the chapter takes a look at the services available in AWS and includes brief descriptions of the services, which are covered in more detail in the following chapters.

AWS Global Architecture: This section looks at the global architecture and how to design highly available applications in AWS.

Accessing AWS: The final part of this chapter looks at how to access AWS through the Management Console, the CLI, the SDKs, and directly through the API.

This chapter covers content important to the following exam domains:

Domain 2: Security

  • 2.1 Make authenticated calls to AWS services.

Domain 3: Development with AWS Services

  • 3.2 Translate functional requirements into application design.

  • 3.4 Write code that interacts with AWS services by using APIs, SDKs, and AWS CLI.

Domain 4: Refactoring

  • 4.1 Optimize applications to best use AWS services and features.

One of the key aspects of cloud computing is the ability to consume any kind of IT service on demand. The cloud also allows you to develop and deploy applications that are highly scalable, highly available, and highly resilient. This chapter introduces cloud computing and AWS and provides an overview of the AWS services and features.

Before tackling the question of why you should consider Amazon Web Services (AWS), it would be helpful to take a short look at the history of AWS. Amazon launched AWS in 2006, and it is widely recognized as the first true public cloud computing platform. At that point in time, Amazon had been in business about 12 years and had grown from a simple online bookstore to a comprehensive e-commerce platform that was able to connect sellers with buyers. During the years growing the business from its bookstore roots to the largest e-commerce platform in the world, the teams running amazon.com had to overcome many limitations of traditional environments and, in the process, gained tremendous experience building highly scalable web application services.

Among the biggest challenges was the ability to provide a platform to users at any time and at any scale. To cover its day-to-day requirements, Amazon saw the need to invest in large amounts of hardware to cover peak performance requirements. These peak requirements were especially pronounced during the end-of-year sales, when resource consumption requirements grew several fold. Amazon needed to operate enough equipment to meet peak demand at the end of the year; at other times, a lot of that capacity went unused.

Initially, the idea was to sell the unused capacity to developers so that they could run their test environments. But with the constant expansion of amazon.com onto the global market, the amount of idle hardware grew as well. AWS was born from this initial idea and has now exceeded all expectations, giving developers the ability to deploy applications with the same availability and scalability in mind as amazon.com itself.

So why should you choose cloud computing as your deployment model? And why AWS? The answer is that cloud computing—and AWS specifically, as the cloud computing market leader with arguably the biggest and most mature cloud computing platform—can outperform the traditional, on-premises datacenter-oriented approach to computing.

Using AWS provides you with advantages that are simply not available in the traditional approach:

  • Trading CapEx for OpEx: By consuming services through a pay-as-you-go model, you can fund your complete IT stack from operating expenses and reserve capital expenditures for your core business rather than for IT.

  • No need to guess the required capacity: The flexibility and on-demand nature of cloud services let you match your application resources to your user volume and adjust them as demand changes.

  • Increased speed and agility: Having the ability to deploy and test the environment in a very short time is crucial for achieving a short time to market for any application. The faster you can reach your audience, the better your chances for adoption.

  • The ability to go global in minutes: Having the ability to deploy your application in multiple locations around the globe in a matter of minutes can be a game changer for certain applications such as social networking, chat, and video calling. The lower the latency to the user, the better the experience.

  • Setting your own price: AWS allows you to consume highly discounted units of computing. You are essentially allowed to bid on unused resources and set your own price for computing resources. (However, if you get outbid, your instances are terminated; see the sketch after this list.)
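For illustration, a Spot request through the CLI might look like the following minimal sketch. The AMI ID and instance type are placeholders, and the maximum price you are willing to pay is passed with --spot-price:

$ # Request one discounted Spot instance, capping the price at $0.05/hour
$ aws ec2 request-spot-instances \
    --instance-count 1 \
    --spot-price "0.05" \
    --launch-specification '{"ImageId": "ami-12345678", "InstanceType": "t3.micro"}'

If capacity at or below your price is no longer available, the instance is reclaimed, so Spot capacity is best suited to interruption-tolerant workloads such as batch processing.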

The cloud is a natural fit for development. You can use a combination of Agile methodologies, DevOps approaches, Continuous Integration/Continuous Deployment (CI/CD), PaaS, and cloud-native designs to deliver your applications in a highly reliable, highly efficient, and rapid manner. The ability to use the cloud can help any application quickly respond to any new business requirements, changing market conditions, and other factors that might influence the success of your application.

“Do I Know This Already?” Quiz

The “Do I Know This Already?” quiz allows you to assess whether you should read the entire chapter. Table 1-1 lists the major headings in this chapter and the “Do I Know This Already?” quiz questions covering the material in those headings so you can assess your knowledge of these specific areas. The answers to the “Do I Know This Already?” quiz appear in Appendix A, “Answers to the ‘Do I Know This Already?’ Quizzes and Q&A Sections.”

Table 1-1 “Do I Know This Already?” Foundation Topics Section-to-Question Mapping

Foundation Topics Section            Questions

Overview of Cloud Computing          1, 8
The Shared Responsibility Model      2, 7
AWS Services                         3, 6
AWS Global Architecture              4, 10
Accessing AWS                        5, 9

Caution

The goal of self-assessment is to gauge your mastery of the topics in this chapter. If you do not know the answer to a question or are only partially sure of the answer, you should mark that question as wrong for purposes of the self-assessment. Giving yourself credit for an answer you correctly guess skews your self-assessment results and might provide you with a false sense of security.

1. Which of these is not a feature of cloud computing, as defined by the National Institute of Standards and Technology (NIST)?

  1. Rapid elasticity

  2. Self-service capability

  3. Application agility

  4. Broad network access

2. Which of the following is AWS responsible for?

  1. Updating the EC2 operating system

  2. Securing the hypervisor

  3. Encrypting RDS databases

  4. Configuring security group rules

3. Which of these services is designed with unlimited storage capacity?

  1. RDS

  2. S3

  3. EBS

  4. An EC2 instance store volume

4. When is an RDS database considered highly available?

  1. Always. An RDS database is inherently highly available.

  2. Only when an RDS database is replicated to another database on the same instance.

  3. Only when an RDS database has replicas in the same availability zone.

  4. Only when an RDS database has replicas in two availability zones.

5. When accessing AWS, which of these is true?

  1. All calls to AWS are API calls, regardless of the access method.

  2. Calls through the CLI are API calls, whereas calls through the Management Console and the SDK are direct invocations.

  3. Calls through the CLI and the SDKs are API calls, whereas calls through the Management Console are direct invocations.

  4. Calls through the SDK are API calls, whereas calls through the Management Console and the CLI are direct invocations.

6. Which of these services gives you the ability to run virtual machines in the cloud?

  1. VMM

  2. Systems Manager

  3. Lambda

  4. EC2

7. Which of the following are customers responsible for? (Choose two.)

  1. Creating and managing SSH keys

  2. Securing the hypervisor

  3. Enabling HTTPS on the Apache server

  4. Decommissioning storage devices

8. Which of the following most accurately describes IaaS in the cloud?

  1. Compute, networking, storage, and platforms in the datacenter

  2. Compute, networking, and storage solutions across a VPN

  3. Compute, networking, and storage solutions as a service

  4. Compute, networking, storage, and platforms as a service

9. You can access AWS services through the API by using the following credentials:

  1. An IAM-generated SSH secret key and public key

  2. An IAM-generated username and password

  3. An IAM-generated API secret key and API key

  4. An IAM-generated secret key and an access key ID

10. What is the smallest component of the AWS global architecture?

  1. A hypervisor

  2. A datacenter

  3. An availability zone

  4. A region

Foundation Topics

Overview of Cloud Computing

In today's world, it is nearly impossible to have a comprehensive conversation about any IT subject without mentioning "the cloud." The reality of cloud adoption and cloud migrations is upon us: the applications that you build need to be cloud ready, deployed in the cloud, or even cloud native. But what exactly does the term cloud encompass, and how can you define the scope of "the cloud"?

Cloud deployments come in four types: three typically found in enterprises and a fourth that is more common in academic circles:

  • Public cloud: Any publicly available cloud service, such as AWS.

  • Private cloud: A cloud environment deployed on premises or with a service provider that is intended for use only within the organization and available only on the private network. Private clouds are typically used in enterprises that are required to adhere to certain regulations or laws.

  • Hybrid cloud: A cloud deployment used across both a private solution and a public solution. Hybrid is becoming a more popular option for many enterprises seeking to expand their capacities into the public cloud.

  • Community cloud: A hybrid deployment where members of a community share their resources with all members of the community. Community cloud deployments are mostly found in academic circles, government institutions, and open-source projects.

Although the four deployment types are widely agreed upon, the exact definition of what cloud computing actually is varies depending on the source. For example, the standard features of a cloud computing environment in the National Institute of Standards and Technology (NIST) cloud computing definition are as follows:

  • On-demand self-service: Describes the ability for a consumer or an application that the consumer operates to provision resources at any time through a self-service portal or API.

  • Broad network access: Describes the ability for the consumer or an application that the consumer operates to access the cloud's resources and services over the network through standard mechanisms, from a broad range of client platforms.

  • Resource pooling: Describes the characteristic of the cloud resources to be pooled into logical groups and isolated from other tenants or consumers of the cloud. For example, Company A should never be able to see any resources from Company B or even be aware of its existence, and vice versa.

  • Rapid elasticity: Describes the capability of the cloud resources to be expanded or contracted at a moment's notice. The cloud provider needs to ensure enough capacity so that all of its consumers can expand their application resource usage to any (reasonable) size at any moment. The cloud provider also needs to give consumers the ability to shrink their application's resource footprint when demand drops.

  • Measured service: Describes the characteristic of the cloud service that measures resource consumption and infrastructure performance and provides the collected data to both the cloud provider and the consumer. This data is essentially how you monitor applications and how the provider bills for usage.

NIST defines three delivery models that indicate how a service is consumed and determine the level of interaction the user has with the underlying compute services:

  • IaaS: Infrastructure as a Service

  • PaaS: Platform as a Service

  • SaaS: Software as a Service

This chapter delves into the three service delivery models a bit later on. For now, take a look at Figure 1-1 to get a better idea of the NIST model of cloud computing.

images

Figure 1-1 Visual Model of Cloud Computing, as Defined by NIST

As you can see, the service delivery models can be sourced from any of the deployment types; they are considered cloud computing only when they exhibit the essential characteristics of a cloud computing service.

Basics of Cloud Computing

When looking at the broader picture, "cloud" is just a unified way of describing a computing system, solution, or application being delivered as a service. The big move to the cloud is thus largely connected to the ability to consume cloud resources as a service instead of having to purchase them outright. This model has been around for a long time in other industries. The automobile industry, for example, offers several options for using a vehicle that vary in flexibility:

  • Outright purchase: You can purchase a vehicle outright. This requires a capital expenditure and obligates you to perform all the maintenance and purchase your own insurance. It is the least flexible option.

  • Leasing: You can lease a vehicle for a certain period. Instead of making a capital expenditure, you can use operating expenditures to acquire a vehicle. This might be a better option when your business plan cannot predict the use of the vehicle past a certain period. After the lease period expires, the vehicle is returned to the leasing company. Some lease plans also offer options to upgrade or cancel early and may include insurance and maintenance.

  • Renting: When your needs are temporary or unpredictable, you can simply opt to rent a vehicle. Renting provides you with the most flexibility and allows you to adapt to varying business needs. For example, you might one day need a passenger vehicle and the next an 18-wheeler truck. When you rent, you can easily pick which service to use and can also avoid paying when you are not using a vehicle.

Cloud computing is basically a rental or lease agreement for compute services. You can consume resources on demand at any time for any duration. Alternatively, for long-term, predictable workloads, you can reserve resources in the cloud and still take advantage of the pay-per-use model at an even lower price due to the reservation.

Don’t forget about the human factor and the benefits the cloud can deliver from the point of view of your workforce:

  • System operations teams benefit greatly from the cloud by being relieved of mundane, repetitive tasks such as manually deploying, updating, and upgrading the infrastructure.

  • The cloud empowers developers to interact directly with the infrastructure via an API and develop their applications much more efficiently.

  • Managers find that the impact of the cloud can be seen in better workforce productivity and more innovation since the employees are able to experiment and test new concepts at a fraction of the cost of traditional systems.

  • Business and application owners are better able to identify the total cost of ownership (TCO) and success of their applications as the costs can simply be represented as operating expenses.

There are plenty of other benefits, depending on the business driver of cloud adoption. That same business driver also determines what type of services you will be consuming from the cloud. You have several different options for consuming services, but all of them fall into the three major categories defined by NIST (see Figure 1-2):

  • IaaS

  • PaaS

  • SaaS

images

Figure 1-2 The Cloud Service Pyramid

IaaS, PaaS, and SaaS

At the lowest level of services that can be consumed from the cloud are infrastructure services. The IaaS service layer gives you the ability to consume resources such as virtual machine instances, block storage, virtual private network segments, and virtual network function devices such as routers, NAT gateways, and firewalls.

The IaaS environment has the capability to directly mirror the traditional on-premises environment and can help ease your transition from on premises to the cloud. However, because IaaS provides raw units of consumption, using it requires the broadest level of knowledge. Not only do you need to operate the environment, but you also need to keep your operating systems and applications secure, updated, and correctly connected. In addition, in most cases when consuming IaaS services, you are required to design and maintain your own high availability, manage backups, and manage the configurations.

Using IaaS can be very beneficial when your environment has strict policies for governance or compliance in place, when access to the operating system is required, or when you need to be able to control all aspects of your environment, from the flow of the traffic through the network to the storage of each bit and byte on the back end. While being able to control everything is an advantage, it’s also a major disadvantage of IaaS. An IaaS deployment can be subject to management overhead, can increase the complexity of the environment, and can make flexibility and high availability difficult to achieve. When the business driver for adoption of cloud computing is to reduce infrastructure maintenance and decrease complexity, you might instead choose PaaS.

Instead of providing raw compute capacity, platform offerings provide you with a complete and fully functioning platform. Examples of PaaS services are databases, queueing services, email services, analytics services, and ephemeral processing services. For example, when a team of developers building an application identifies the need for a database service to store content, the team has two options:

  • Option 1: Use Infrastructure as a Service (IaaS):

    1. Deploy a virtual private network.

    2. Deploy a virtual machine (VM).

    3. Create the security rules required to access the VM in the cloud environment.

    4. Once the VM is running, update the operating system and install the database service.

    5. Create the firewall security rules required to access the VM in the operating system.

    6. Configure and tune the database service.

    7. Create a backup procedure for the database service.

    8. Deploy the database and export the connection string to the application.

    9. Manage the updates and upgrades of the server and the database application.

  • Option 2: Use Platform as a Service (PaaS):

    1. Deploy a virtual private network.

    2. Create the security rules required to access the database service in the cloud environment.

    3. Deploy the database service and export the connection string to the application.
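To underline the difference, step 3 of the PaaS option boils down to a single API call. The following CLI sketch is illustrative only; the identifier, instance class, and credentials are placeholder values:

$ # Deploy a managed MySQL database in one call
$ aws rds create-db-instance \
    --db-instance-identifier app-db \
    --engine mysql \
    --db-instance-class db.t3.micro \
    --allocated-storage 20 \
    --master-username admin \
    --master-user-password 'Str0ngPassw0rd!'

Everything below the database engine, such as the operating system, patching, and backup plumbing, is handled by the provider.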

As you can see, deploying a database using the PaaS approach requires much less provisioning than does the IaaS approach. The greatest benefit of PaaS services is that all steps can be done with simple API calls, and no interaction with an operating system or a configuration management tool is required to deploy the service. Examples of PaaS services that are available for consumption in the cloud include, but are not limited to, the following:

  • Database services

  • Caching services

  • Data ingestion and storage solutions

  • Serverless data processing solutions

  • Messaging solutions such as queuing, distribution, and delivery

  • Analytics, searching, and data transformation

  • Specialized mobile, IoT, machine learning, data science, and other applications

The deployment of fully functional services in the cloud also enables the cloud-native model for developing and deploying applications. The cloud-native approach dictates that any service being consumed from the cloud should be consumed in a way that is native to the cloud platform API. This means the application is built for the cloud and can be deployed only in the cloud. A cloud-native application typically consumes cloud resources only when it requires them; in essence, it can consume no resources at all while it is idle.

The top layer of the service layer triad is Software as a Service (SaaS). Essentially, SaaS represents any application that is designed to be consumed by the users. The deployment and delivery mode of SaaS applications can vary, from web browser to mobile to IoT applications. Most SaaS applications are designed to be used directly by consumers and require very little or no IT knowledge as a prerequisite. Basically, with SaaS, you can simply log in to an application and start using it. Usually, the SaaS model provides very little or no access to the underlying features running the application as all levels of the infrastructure and the platform are managed by the SaaS provider.

Virtualization and Containers

Cloud computing is highly dependent on the ability to consume resources in isolated pools. But the underlying technology of computing still relies on physical units of compute. Whether those are physical servers, disk storage, or network devices, they need to be segregated into components that allow the user to consume the infrastructure in the smallest feasible units.

The engine that powers cloud computing is virtualization. Virtualization is the ability to slice up a piece of hardware into logical units that can be given to individual consumers and consumed independently while running on the same platform.

In computing, we typically reference virtual machines as our units of compute, but more and more of the cloud is now being powered by containers. Containers are essentially designed to offer the same capabilities as virtual machines but in an even smaller form factor.

A virtual machine consumes resources measured in virtual CPUs, hundreds of megabytes or multiple gigabytes of memory, and several gigabytes of storage; each virtual machine houses a complete operating system. The underlying hypervisor isolates the operating system from other tenants, and the operating system ensures that only the tenant has access to the virtual machine instance contents.

Containers are much leaner than VMs. Each container is a package that contains an application, its dependencies, and its configuration but does not include its own operating system. Instead of running a separate operating system per application, a container engine isolates the containers from each other within a shared operating system and allows the underlying operating system to share its resources with the containers. Each application thus requires only enough resources to run the application itself. Container resources are usually measured in shares of a virtual CPU, tens to hundreds of megabytes of memory (rarely a few gigabytes), and storage consumption measured in megabytes. Figure 1-3 compares virtual machines and containers.

images
images

Figure 1-3 Containers Versus VMs
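As an illustration of this leaner resource model, most container engines let you cap a container's share of the host's resources at launch. The following sketch assumes Docker as the container engine; the image name and limits are arbitrary:

$ # Run an nginx container limited to half a vCPU and 256 MB of memory
$ docker run -d --name web --cpus 0.5 --memory 256m nginx

The container starts in seconds because no guest operating system needs to boot; only the packaged application process runs.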

Note

Containers are an increasingly important part of cloud computing. You can find a more comprehensive overview of containerization in AWS in Chapter 3, “Compute Services in AWS.”

The Shared Responsibility Model

Because AWS is a public cloud computing platform, the ownership of the infrastructure, the platforms, and the application layer is divided between the provider and the consumer. It is crucial that you are aware of the shared responsibility model as it can impact your model of deployment as well as the reliability of your application.

The shared responsibility model dictates that both the provider and the consumer must ensure that their parts of the environment are secured. You as the consumer share the responsibility of operating the platform with the provider, and whenever you deploy a certain type of service, it is on you to ensure that the service is configured with security, resiliency, and availability in mind.

The shared responsibility model says that the provider is responsible for a bigger share of the overall responsibility with PaaS than with IaaS. The following examples demonstrate the differences in responsibility between IaaS and PaaS.

Example 1: Running a Database Virtual Machine Instance in the Cloud Using IaaS

The provider is responsible for

  • Securing the hardware in the datacenter

  • Securing the hypervisor

  • Securing the storage subsystems

  • Securing the physical network devices in the datacenter

  • Securing the uplink to the Internet and the uplinks between the datacenters

The consumer is responsible for

  • Securing the operating system user and network access (firewall, users, ports, key pairs, and so on; see the sketch after this list)

  • Ensuring that the operating system and application are updated

  • Deploying and managing the database application

  • Securing the database application from unauthorized access

  • Securing the database content from unauthorized access
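To make the consumer's share of the IaaS example concrete, securing network access to the database VM is typically done with security group rules such as the following hedged sketch (the group ID and source range are placeholders):

$ # Allow MySQL traffic to the database VM only from the application subnet
$ aws ec2 authorize-security-group-ingress \
    --group-id sg-0abc12345678 \
    --protocol tcp \
    --port 3306 \
    --cidr 10.0.1.0/24

The provider secures everything below the hypervisor; rules like this one remain entirely the consumer's job.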

Example 2: Running a Database Deployed as PaaS

The provider is responsible for

  • Securing the hardware in the datacenter

  • Securing the hypervisor

  • Securing the storage subsystems

  • Securing the physical network devices in the datacenter

  • Securing the uplink to the Internet and the uplinks between the datacenters

  • Securing the operating system user and network access (firewall, users, ports, key pairs, and so on)

  • Ensuring that the operating system and application are updated

  • Deploying and managing the database application

  • Securing the database application from unauthorized access

The consumer is responsible for securing the database content from unauthorized access.

Figure 1-4 illustrates the shared responsibility model differences in the types of cloud services.

images
images

Figure 1-4 The Shared Responsibility Model and IaaS, PaaS, and SaaS

From Figure 1-4, you can see the areas of responsibility shared between the consumer in the dotted portions and the provider in the striped portions. In all three models, you as the consumer are responsible for managing user identity and access to the services to which you subscribe. The IaaS model requires the highest amount of effort but is the most flexible, while the SaaS model is the easiest to use. Figure 1-4 shows some sharing of responsibility for the data in the SaaS model. This is because some SaaS providers have the ability to feed data into your environment from different sources, and in such cases, the provider is also responsible for the security and validity of the data being ingested.

For developers, the ability to combine ease of use with flexibility is usually the most important aspect of cloud adoption. This is where PaaS comes in: it allows you to find the right mix of power and complexity by dramatically reducing the overall management footprint. This, combined with the ability to control the deployment and management of resources directly through the API, enables developers to focus on building the application instead of building and maintaining the infrastructure. The use of PaaS services can be a game changer for teams looking to run lean development processes.

AWS Services

images

AWS defines two service types: the Foundation services and the Platform services. Whereas the Platform services completely comply with the PaaS model, the Foundation services are an expansion of the basic IaaS, with some essential components that would usually fit better in the PaaS and SaaS cloud definitions.

Foundation Services

AWS Foundation services include all the IaaS services available in AWS and can be divided into several functional groups:

  • Network services

  • Compute services

  • Storage services

  • Security and identity services

  • End-user applications

Network Services

The network services allow your application’s components to interact with each other and also connect your application to the Internet and private networks. Examples of network services include the following:

  • Amazon Virtual Private Cloud (VPC): Allows you to connect your application with private network ranges, connect those private ranges with the Internet, and assign public IP addresses (see the sketch after this list)

  • AWS Direct Connect: A private optical fiber connection service that connects your on-premises sites with AWS

  • AWS Virtual Private Gateway: A component of VPC that provides the capability for establishing VPN connections with your on-premises sites

  • Amazon Route 53: The next-generation, API-addressable Domain Name System (DNS) service from AWS

  • Amazon CloudFront: The caching and Content Delivery Network (CDN) service in the AWS cloud

  • Amazon Elastic Load Balancing (ELB): Allows load balancing of traffic across Amazon Elastic Compute Cloud (EC2) instances, Amazon Elastic Container Service (ECS) containers, or other IP-addressable targets
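As a taste of how these network services are driven through the API, creating a private network range and a subnet takes just two calls. The CIDR ranges are illustrative, and the VPC ID is a placeholder returned by the first call:

$ # Create a VPC with a /16 private range
$ aws ec2 create-vpc --cidr-block 10.0.0.0/16
$ # Carve out a /24 subnet inside it (use the VpcId returned above)
$ aws ec2 create-subnet --vpc-id vpc-0abc123 --cidr-block 10.0.1.0/24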

Figure 1-5 illustrates how different services allow you to connect to your application in different ways and from different locations. The network services in the preceding list also allow you to make your application highly available and deliver the content in a much more efficient manner.

images

Figure 1-5 AWS Network Services Providing Connectivity to Your Application

Compute Services

You have a lot of flexibility when it comes to compute services in AWS. The following are examples of compute offerings in AWS:

  • Amazon Elastic Compute Cloud (EC2): Provides the ability to deploy and operate virtual machines running Linux and Windows in the AWS cloud (see the sketch after this list)

  • Amazon Elastic Container Service (ECS): Provides the ability to deploy, orchestrate, and operate containers in the AWS cloud

  • Amazon Elastic Kubernetes Service (EKS): Provides the ability to deploy, orchestrate, and operate Kubernetes clusters in the AWS cloud

  • AWS Lambda: Provides the ability to run simple functions in the AWS cloud
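For example, launching a virtual machine on EC2 is a single CLI call. This is a sketch only; the AMI ID and key pair name are placeholders you would replace with real values from your account:

$ # Launch one Linux virtual machine
$ aws ec2 run-instances \
    --image-id ami-12345678 \
    --instance-type t3.micro \
    --count 1 \
    --key-name my-key-pair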

Figure 1-6 illustrates the evolution of applications from virtual machines to containerization and finally to functions. The services are designed to deliver the same functionality with increasing efficiency and reliability built into the system as you move from VM-based applications to cloud service–based or cloud-native applications.

images

Figure 1-6 Evolution of AWS Compute Options

Storage Services

There are many types of data, and for each type you need to choose the right storage solution. In the AWS cloud, you have several different storage options, depending on the types of data you are storing. Here are a few examples:

  • Amazon Elastic Block Store (EBS): EBS provides block-accessible, network-attached, persistent storage volumes that you can connect to EC2 instances and ECS containers.

  • Amazon Elastic File System (EFS): EFS provides a network-attached file system that supports the Network File System (NFS) protocol and allows you to share files among EC2 instances, ECS containers, and other services.

  • Amazon Simple Storage Service (S3): Designed to store unlimited amounts of data, S3 is the ultimate object storage system. All objects in S3 are accessible via standard HTTP methods.

  • Amazon Glacier: This archiving storage solution can be automatically integrated with S3.

  • AWS Storage Gateway: This hybrid storage solution exposes AWS as a storage back end to your on-premises servers.

  • AWS Snowball and Snowmobile: These data transfer devices allow for physically moving data from on premises to the cloud at any scale.

Figure 1-7 illustrates different storage options and the purpose and cost associated with each storage type.

images

Figure 1-7 AWS Storage Options

Security and Identity Services

To provide a comprehensive approach to using the AWS environment in a secure manner, AWS provides security services, including the following:

  • AWS Identity and Access Management (IAM): This is the standard user and access management service in AWS, which gives you the ability to control both user access to AWS and access to your application in one place.

  • AWS Key Management Service (KMS): This service enables you to define a unified way to manage encryption keys for your AWS services and applications.

  • AWS CloudHSM: This cloud-based hardware security module service extends the capabilities of KMS and allows you to manage your encryption keys in dedicated, single-tenant hardware.

  • Amazon Inspector: Inspector provides an automatic assessment of your services running in AWS with a prioritized, actionable list for remediation.

  • AWS WAF (Web Application Firewall): WAF protects web applications from attacks that use exploits and security vulnerabilities.

End-User Applications

Within the scope of Foundation services, AWS also bundles end-user applications. These include the ability to provide users with everything required to perform their work, including but not limited to the following:

  • Amazon WorkMail: This enterprise email and calendar service seamlessly integrates with almost any mail client.

  • Amazon WorkDocs: This document editor and collaboration service has its own extensible SDK against which you can develop applications for your workforce or your clients.

  • Amazon WorkSpaces: With this managed virtual desktop infrastructure (VDI) service, you can create Windows desktops and manage their domain membership, their application configuration, and the distribution of the desktops to the individuals within your organization.

As you can see, some of these services definitely do not fit within the standard IaaS model, and the name Foundation services is very fitting for this grouping. However, Foundation services are designed to provide most of the capabilities that AWS has to offer with Platform services, and they are also the basis for some of the Platform service solutions.

Platform Services

The AWS Platform services are essentially representations of pure PaaS services within the AWS cloud. The Platform services can be divided into the following groups:

  • Databases

  • Analytics tools

  • Application services

  • Developer tools

  • Specialized services for mobile, IoT, and machine learning

Databases

Several different database services are available from AWS. Like the storage services, the database services have been designed to fit different types of data and enable you to choose the correct database type to get the most out of the AWS platform. The following are examples of database services:

  • Amazon Relational Database Service (RDS): RDS is a fully managed relational database service for deploying and managing Amazon Aurora, PostgreSQL, MySQL, MariaDB, Oracle, and Microsoft SQL Server databases in AWS.

  • Amazon ElastiCache: This is a fully managed caching service for deployment of Redis or Memcached in-memory data stores in AWS.

  • Amazon DynamoDB: This is a fully managed nonrelational database service in AWS.

  • Amazon DynamoDB Accelerator (DAX): DAX is a fully managed caching service for DynamoDB.

  • Amazon Redshift: This is a fully managed data warehousing service for deployment of petabyte-scale data clusters at very low cost.

Figure 1-8 illustrates different database options and the purpose, data type, and scaling associated with each database type.

images

Figure 1-8 AWS Database Types and Features

Analytics Tools

Among the AWS analytics tools are services that help you ingest, transform, and digest data at any scale. The following are some of the most notable examples:

  • Amazon Kinesis: This fully managed set of services offers the ability to capture, process, and store streaming data at any scale.

  • Amazon Elastic MapReduce (EMR): EMR provides the ability to run open-source Big Data extract–transform–load (ETL) workloads in the AWS cloud.

  • Amazon CloudSearch: This managed search service can be easily integrated into your applications.

Application Services

Several application services enable you to perform work and provide extensions to your applications that can make them more unified and scalable and that can allow you to offload more expensive components running in the cloud. Examples include the following:

  • Amazon API Gateway: This is a fully managed API management and deployment service.

  • Amazon Elastic Transcoder: This is a cost-effective and scalable fully managed media transcoding service.

  • Amazon Simple Workflow (SWF): SWF is designed to build business logic from your application directly in the cloud. It can help you orchestrate any workflow with a simple-to-use graphical interface that can automatically connect back-end resources and help you automate your business logic and data flows. SWF also supports manual steps for complex business operations that consist of both manual and automated processing.

Developer Tools

AWS provides a full set of tools that allow you to migrate your complete development to managed services in AWS. Among the most notable examples are the following:

  • AWS CodeCommit: This Git-compatible version control repository offers unlimited capacity and high availability at a very affordable price point.

  • AWS CodeBuild: This fully managed build service can help you automate builds and can integrate with most build tools commonly used in development.

  • AWS CodeDeploy: This fully managed deployment service can deploy your artifacts to a working environment.

  • AWS CodePipeline: This fully managed code workflow orchestration service can help automate your complete CI/CD pipeline.

  • AWS Cloud9: This browser-based IDE can help developers use any browser to develop, test, and deploy code in AWS.

  • AWS CodeStar: This fully managed CI/CD service can help you get started in minutes. It enables you to deploy a CI/CD pipeline from predesigned templates and add users to the project so you can start work fast.

Specialized Services for Mobile, IoT, and Machine Learning

AWS offers a very large set of services for accomplishing more specialized tasks, including the following:

  • Amazon Pinpoint: This allows developers to easily engage users on their mobile devices with customized content.

  • AWS Device Farm: This tool enables you to test an application on devices in the Amazon cloud at scale before deploying it to production.

  • Amazon Cognito: This centralized authentication service for mobile and web users can easily be federated with external directories through OpenID Connect, OAuth, and SAML.

  • AWS Internet of Things (IoT) Services: This set of services is designed to provide everything required to run IoT, including the FreeRTOS operating system and components that help manage and work with IoT devices at any scale.

  • Amazon SageMaker: SageMaker offers powerful tools that allow developers to design, build, and train machine learning models very quickly.

Management Services

On top of the Foundation and Platform services, several Management services in AWS allow you to develop, deploy, and monitor your applications as well as maintain compliance and adhere to any policies dictated by governance principles. The following are a few examples:

  • Amazon CloudWatch: This AWS cloud monitoring service allows for storing metrics and logs from any device running on AWS or on premises.

  • AWS CloudTrail: This is an API call logging service. Every call in the AWS environment is an API call, and CloudTrail enables you to maintain a complete record of actions taken against your AWS infrastructure (see the sketch after this list).

  • AWS Config: This configuration state recording service also has the ability to detect state changes and perform alerting.

  • AWS CloudFormation: CloudFormation provides the ability to implement an infrastructure as code approach when deploying your applications. This is the standard way to interact with the AWS services through a specification document.

  • AWS OpsWorks: This is a managed service for running Chef- and Puppet-compatible configuration management in the AWS cloud.
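Because every management action is an API call, services like CloudTrail make the whole account auditable. The following is a hedged example of querying recent activity; the event-name filter is illustrative:

$ # Show the five most recent CreateBucket calls recorded by CloudTrail
$ aws cloudtrail lookup-events \
    --lookup-attributes AttributeKey=EventName,AttributeValue=CreateBucket \
    --max-results 5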

images

AWS Global Architecture

One of the most important aspects of running services in AWS is the ability to make your application highly available and to serve the content to the users in a way that is compliant with any regulations governing data storage in various regions. The AWS infrastructure gives you the ability to select a location that is closest to your user base so that the latency is as low as possible and the data is stored according to the requirements of data sovereignty laws.

The AWS global architecture has four components:

  • Datacenters

  • Availability zones

  • Regions

  • Edge locations

Figure 1-9 shows the approximate locations of current AWS regions and the number of availability zones in each of the depicted regions. Planned AWS regions are depicted as empty circles.

images

Figure 1-9 AWS Global Infrastructure

Datacenters

The smallest piece of the AWS global infrastructure is a datacenter. One or more datacenters can be grouped together into an availability zone. AWS builds its own datacenters, and it also operates in third-party facilities. A typical AWS datacenter has the following characteristics:

  • Between 50,000 and 80,000 compute units per datacenter in approximately 500 to 1000 racks

  • Approximately 11 PB storage capacity per rack

  • Up to 100 Tbps of connectivity over a proprietary, redundant Layer 2/Layer 3 network and network security stack

Availability Zones

A datacenter or a group of datacenters is grouped together into an availability zone. An availability zone is a part of an AWS region that is designated as a fault isolation boundary: a failure might affect all the datacenters within that zone but should not spread to other zones. The availability zones are connected with low-latency private links that allow for single-digit millisecond latencies across all zones within a region.

You can deploy highly available applications with synchronous replication across multiple availability zones. For example, when deploying an RDS database instance, you can select the Multi-AZ option to create a secondary standby database that has a synchronous copy of the database stored in an availability zone different from the master.
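For an existing instance, the Multi-AZ option can also be switched on after the fact. The following sketch assumes a database identifier of app-db; the flags are standard CLI options:

$ # Convert a single-AZ RDS instance to a Multi-AZ deployment
$ aws rds modify-db-instance \
    --db-instance-identifier app-db \
    --multi-az \
    --apply-immediately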

Some services are automatically redundantly deployed across availability zones by default. For example, S3 data is automatically distributed across at least three availability zones to provide regional high availability.

You need to learn about each service and its characteristics so that you can design an application that is highly available and highly resilient to failure.

Regions

By choosing a region, you can deploy your application close to your users while complying with any applicable data sovereignty laws. AWS offers a number of regions distributed around the globe to reach as much of the global audience as possible at the lowest possible latency. When choosing a region, you should always consider the following factors:

  • Data sovereignty: Are there laws that you need to conform to in a certain region, or can you store the data anywhere?

  • User proximity: How far from the users can you host the services?

  • Regional resilience: Regions can go down. Do you need to withstand a region outage?

  • Service availability: Is the service you are using available in the region? Not all services are available in all regions.

  • Regional pricing: Pricing differs in different regions as AWS charges you what it costs to run services in a certain region.

When regional redundancy is required, you need to consider that the data being replicated across regions will be traversing the Internet, which means there will be costs associated with data transfer out of AWS for any data flowing from one region to the other. When designing cross-region deployments, make sure to consider the following factors:

  • Synchronous replication across multiple regions will probably not be possible due to higher-than-single-digit millisecond latency between the locations.

  • Replication traffic counts against outgoing transfer costs.

  • Several managed services are designed to provide built-in replication support across multiple regions. Consider using those.

  • You need to determine how to recover and resynchronize after a region outage.

  • Client latency can increase in case of failover to a distant region, which can negatively influence the user experience and even breach the conditions of the application SLA.

Edge Locations

To further reduce the latency experienced by your users, you can utilize the services running in the edge locations. Edge locations are positioned close to urban centers, including ones that are not near regions; they make it possible to terminate connections and provide cached content to users with the least latency possible. On top of terminating connections and providing caching capability, edge locations also allow you to return dynamic responses to users by using Lambda@Edge processing functions, which let you implement authentication, verification, and other features such as detecting user agents and putting up paywalls on your sites.

Figure 1-10 illustrates the approximate distribution of AWS edge locations around the globe.

images

Figure 1-10 AWS Edge Locations

To provide the lowest latency for DNS, all of the Route 53 servers are deployed in all edge locations around the globe. This vast distribution allows the Route 53 service to be highly resilient and allows AWS to promise a 100% SLA on the Route 53 service.

Accessing AWS

AWS provides several options for managing and configuring your resources. The environment is designed to be completely API addressable. All management calls to all services in AWS are API calls, which means both humans and machines can access AWS services. This section looks at the following ways to connect to the AWS management environment:

  • The AWS Management Console

  • The AWS command-line interface (CLI)

  • The AWS software development kits (SDKs)

  • The AWS Application Programming Interfaces (APIs)

Before you begin using AWS, you will of course need to create an AWS account. There are some general recommendations to consider when creating an AWS account. When you create an account, a “root user” is generated. This user is represented by the email address you used to register the account. As a general rule, you should ensure that this email address is somehow connected to more than one user in your organization so that the account can be recovered in case credentials are lost. Also, it is a good idea to ensure that the phone number entered for account verification is accessible to multiple users. You will need both the email address and the phone number to recover lost credentials for the root user.

images

Creating an AWS Account

To start creating an account, open the following link in your browser:

https://portal.aws.amazon.com/billing/signup

Here you need to enter the email address you chose to use with AWS, create a password, and optionally name your account so that you don’t need to remember the long account number that will be assigned to your account (see Figure 1-11). Note that the email address you provide is tied to the root user account.

images

Figure 1-11 AWS Signup Process, Step 1

Next, fill out the contact information form as shown in Figure 1-12.

images

Figure 1-12 AWS Signup Process, Step 2

Finally, in the page shown in Figure 1-13, enter your payment information and click Secure signup at the bottom of the page.

images

Figure 1-13 AWS Signup Process, Step 3

AWS Management Console

When the signup process is complete, you are presented with the AWS Management Console, which you can use to manage your account and also learn about AWS by looking at the tutorials and the documentation links. Your Management Console should look similar to the one in Figure 1-14.

images

Figure 1-14 AWS Management Console

On the top of the screen is a bar with several pull-down menus:

  • Services: This opens the Services screen, where you can find the service that you would like to manage.

  • Resource Groups: Here you can create resource groups. Using resource groups can be very beneficial in operations that span services across a region. The Resource Groups section allows you to create tags that will be used to identify the services belonging to various resource groups.

  • The small bell symbol: Click this to see alerts.

  • Your account name: Click this button to open a drop-down menu where you can perform the following actions if you have the appropriate permissions:

    • Manage your own account

    • Manage your organization’s account

    • Access the Billing Dashboard

    • Manage your security credentials

    • Switch role

    • Sign out

  • Region: Select the region where you would like to deploy your services.

  • Support: This pull-down menu provides access to the following:

    • Support Center, to open tickets with support

    • The AWS forums

    • AWS documentation

    • Training, including self-paced labs and online and classroom training

    • Other resources related to AWS

Below the top bar, the core of the Management Console is segmented into the following sections:

  • AWS services: This section lets you select and search for the same services you can find in the Services pull-down menu.

  • Build a solution: This is a good place to start if you have a certain solution you want to build.

  • Learn to build: Tutorials in this section teach you how to deploy services.

  • Helpful tips: Get tips and ideas on how to get started quickly and navigate the AWS Management Console easily.

  • Explore AWS: Take a guided tour of AWS services.

As mentioned earlier in this section, the account email address is tied to the account root user. AWS advises against the use of the root user in any form unless there is an absolute need for it. It is also advisable to give the root user a strong password and enable multifactor authentication (MFA) on the root user to secure the account. AWS supports several free MFA applications that can be integrated with AWS accounts for two-factor authentication. Store the credentials and the MFA device for the root user in a secure place, such as a safety deposit box, so that the credentials can be retrieved in case of an emergency. Once you have secured the root user, create an Identity and Access Management (IAM) user with administrative permissions and log out from the root user account.

Next, let's take a look at the recommended steps for creating an IAM user, granting it permissions, and creating the secret access key and access key ID credentials the IAM user needs to use the AWS CLI. In the AWS Management Console, search for IAM and click IAM in the results (see Figure 1-15).

images

Figure 1-15 Searching for IAM in the AWS Management Console

In the Identity and Access Management (IAM) section of the Management Console, click Users in the menu on the left (see Figure 1-16) and then click on the Add user button.

images

Figure 1-16 Adding a User in the IAM Section of the Management Console

Next, create a username for your new user and select the check box next to the Programmatic access option under the access type, as shown in Figure 1-17. When you do this, AWS creates an access key ID and secret access key that you will be able to use with the AWS CLI and SDKs.

images

Figure 1-17 Adding a User by Creating a Username and Selecting the Access Type

Next, grant administrative access to this user by adding the user to the Administrators group, as shown in Figure 1-18. After you create this user, you will not need to (and should not) use the root user account for common administrative tasks.

images

Figure 1-18 Assigning Permissions to a User

In the next two dialogs, you can create a tag for the user and review your changes. The final dialog gives you your only opportunity to retrieve the credentials for the user; after you close this dialog, you cannot view the secret key or password anymore. In the final dialog, shown in Figure 1-19, you can click the Show link in the Secret access key column to display the secret key or download a .csv file with the credentials in plaintext. At this point, record both the access key ID and the secret access key, which you will need to configure the AWS CLI.

images

Figure 1-19 Recording the Access Key ID and Secret Access Key

Note

For a detailed description of the access key ID and secret key and how to manage user authentication and access, see Chapter 2, “Authentication, Identity, and Access Management.”
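For reference, once the AWS CLI (described in the next section) is configured, the same user setup can be scripted. This sketch assumes the Administrators group already exists; the username is illustrative:

$ # Create the user, grant it admin rights, and generate CLI credentials
$ aws iam create-user --user-name devuser
$ aws iam add-user-to-group --user-name devuser --group-name Administrators
$ aws iam create-access-key --user-name devuser

The create-access-key call returns the access key ID and secret access key in its output; as in the console, the secret is available only at creation time, so record it securely.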

The Management Console is fairly comprehensive, but the real power of AWS is demonstrated when accessing the services in a programmable way through the CLI, the SDKs, or the API directly.

AWS CLI

The AWS command-line interface (CLI) is a powerful open-source tool designed to be used in day-to-day management operations from the Linux and Windows command-line environments. The CLI, which was designed using the AWS SDK for Python, can be integrated with bash scripting in Linux and PowerShell in Linux and Windows. This is a powerful way to use the CLI as an automation tool for your AWS infrastructure. The automation can be integrated into standard configuration management tools that enable you to perform bash or PowerShell commands. You can also manage your complete infrastructure from a single-instance operating system, a container, or even a Lambda function with a standard shell script approach.

You can store any shell scripts written for the CLI in your versioning repository, and you can give command-line inputs at the time of deployment in your CI server or your CD process so that the infrastructure components are deployed for the build process or before deploying the application from within the CI/CD toolchain. This means that you can treat the CLI input scripts as infrastructure as code documents, which gives you a powerful Swiss Army knife–like tool that can tackle any job.

The CLI can be a really valuable tool when performing testing and Proof of Concept (PoC) deployments in the AWS environment. It also allows you to quickly switch environments by switching profiles.

images
Installing the AWS CLI

To download and install the AWS CLI on your computer, you need to refer to the “Installing the AWS CLI” section of the AWS documentation that is available at https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html.

Using the AWS CLI

Once it is installed, the AWS CLI needs to be configured. To configure the AWS CLI, you need to have access to an AWS IAM access key ID and secret key. You will be using the access key ID and secret key created earlier in this chapter to authenticate to AWS.

To configure the AWS CLI, issue the aws configure command, which initiates the configuration process. In this process, you are prompted for the AWS access key ID and secret key, the default region in which you want to issue commands, and the default output format of your commands. In this example, you will be selecting the Ohio region by specifying the us-east-2 region code and the output of the commands as JSON so that the commands can be fed into other processes:

$ aws configure
AWS Access Key ID []: AKIA4TKFDVDJIIKERD5R
AWS Secret Access Key []: y3VYmHupCY9uDVUt8U
Default region name []: us-east-2
Default output format []: json

This command creates two files—config and credentials—in the .aws subdirectory of the home directory of the user who initiated the command.

You also have the ability to output in text or table format; the table format is the easiest to read, but it includes a lot of formatting characters that you need to strip if you feed the output of an AWS command into another command. The region and the output format can be overridden by specifying the --region and --output command-line options.

If you check the contents of the config file, you should see output similar to this:

$ cat .aws/config
[default]
output = json
region = us-east-2

The contents of the credentials file should have your secret key and access key ID recorded:

$ cat .aws/credentials
[default]
aws_access_key_id = AKIA4TKFDVDJIIKERD5R
aws_secret_access_key = y3VYmHupCY9uDVUt8U

These credentials and this configuration belong to the default profile, as denoted by the [default] profile marker at the top of each file, and they are used unless you specify otherwise. To add more profiles, you can use the --profile option in the CLI command. When you run aws configure --profile and specify a profile name, a new entry appears in the config and credentials files, with the profile name in square brackets.
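For instance, the following commands sketch how you might create and then use a profile named developer01 (the profile name is just an example); the second command runs with that profile's credentials and region instead of the defaults:

$ aws configure --profile developer01
$ aws ec2 describe-instances --profile developer01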

Note

Another way to store and access the configuration parameters is to declare them in operating system environment variables. This is generally not considered a best practice because the credentials and config files are a more secure and permanent way of storing the credentials. To learn more about storing the credentials in environment variables, visit https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-envvars.html.
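For completeness, the following Linux example shows the standard variable names the CLI reads; the key values are the same placeholders used earlier in this chapter:

$ export AWS_ACCESS_KEY_ID=AKIA4TKFDVDJIIKERD5R
$ export AWS_SECRET_ACCESS_KEY=y3VYmHupCY9uDVUt8U
$ export AWS_DEFAULT_REGION=us-east-2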

After the AWS CLI has been configured, you can start using it. Before doing so, you need to understand the structure of the AWS CLI. The CLI has a very simple model that is easy to learn:

aws service operation options
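For example, in the following command, shown purely to illustrate the model, ec2 is the service, describe-instances is the operation, and --output table is an option:

$ aws ec2 describe-instances --output table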

To get a list of the services that exist, you can simply run the aws help command. You can then pick a service and run help on it to get more information about the operations available for that service. For example, if you were to run the following help command on the s3api service, you would see the output shown in Figure 1-20:

$ aws s3api help

Figure 1-20 Output of the aws s3api help Command

As you can see, the command opens a man page for the s3api service. Scroll down in the help window to see the available commands; when you’re done, press q to exit. To demonstrate the functionality of the AWS CLI, next you will create an S3 bucket with a random name. You need to choose a random name because S3 bucket names are globally unique.

To find out what options are required and supported when running the aws s3api create-bucket operation, you can run the help option as follows:

aws s3api create-bucket help

This command should display the man page for the create-bucket operation, as demonstrated in Figure 1-21.


Figure 1-21 Output of the aws s3api create-bucket help Command

The operation requires the --bucket option to specify the bucket name; you can also add the --region option to specify the region in which the bucket is created:

aws s3api create-bucket --bucket 29378425-r4nd0m-bucket --region us-east-1

The output should look similar to the following:

{
     "Location": "/29378425-r4nd0m-bucket"
}
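If you want to confirm that the bucket was created, one quick check is to list your buckets by name with the --query option, which filters the output with a JMESPath expression:

$ aws s3api list-buckets --query "Buckets[].Name"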

The AWS CLI also has the ability to accept a JSON input file for any operation. This is very useful if you would like to orchestrate the creation and management of your infrastructure through a process that uses predefined templates with certain definitions that you need to adhere to during any AWS management actions. To generate such a template (called a skeleton file), you can use the --generate-cli-skeleton option, and the output can be stored in a file. The following example shows how to create a skeleton file for creating an S3 bucket:

$ aws s3api create-bucket --generate-cli-skeleton

The output of the operation provides you with all the options that you can modify when creating the bucket:

{
    "ACL": "private",
    "Bucket": "",
    "CreateBucketConfiguration": {
        "LocationConstraint": "us-west-1"
    },
    "GrantFullControl": "",
    "GrantRead": "",
    "GrantReadACP": "",
    "GrantWrite": "",
    "GrantWriteACP": "",
    "ObjectLockEnabledForBucket": true
}
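To put the skeleton to use, you could redirect it to a file, edit the values, and feed the file back to the operation with the --cli-input-json option; the file name create-bucket.json is just an example:

$ aws s3api create-bucket --generate-cli-skeleton > create-bucket.json
$ aws s3api create-bucket --cli-input-json file://create-bucket.json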

Note

Upcoming chapters discuss the use of input files with the CLI in more detail.

AWS SDKs

In addition to the CLI, AWS provides software development kits (SDKs) for the most popular programming languages. The SDKs allow you to integrate calls to the AWS APIs into your application and manage your infrastructure straight from your application code. This capability makes a real difference when building an application because the code itself can talk directly to the AWS services and discover whether all underlying infrastructure requirements are met; if resources are missing, the code can call the AWS API and deploy the services it requires to run. In this way, the SDKs empower developers to build applications that can self-manage, self-diagnose, and self-resolve issues stemming from the AWS infrastructure components, which is especially valuable when building cloud-native applications. When you push your code to the code repository, the application can automatically trigger the build, the tests, the deployment of the infrastructure, the deployment of the application, and any automated tests required before continuing to deploy to staging and finally to production. The application can also monitor its own state by connecting to the CloudWatch service and automatically determine whether any addition or removal of resources is required.

SDKs are currently available for Java, JavaScript, .NET, Python, Node.js, Ruby, PHP, Go, and C++, and there is also an SDK for mobile and IoT applications. The SDKs are designed to empower developers with the ability to take complete control of both the application and the infrastructure the application consumes.
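As a minimal sketch of this self-checking pattern with the Python SDK (boto3), the following code verifies that an S3 bucket the application depends on exists and creates it when missing; the bucket name and region are assumptions carried over from the earlier CLI examples:

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client('s3', region_name='us-east-2')
bucket = '29378425-r4nd0m-bucket'  # example bucket from earlier in the chapter

try:
    # head_bucket raises a ClientError if the bucket is missing or inaccessible
    s3.head_bucket(Bucket=bucket)
    print('Bucket already exists:', bucket)
except ClientError:
    # Outside us-east-1, the region must be passed as a LocationConstraint
    s3.create_bucket(
        Bucket=bucket,
        CreateBucketConfiguration={'LocationConstraint': 'us-east-2'}
    )
    print('Created bucket:', bucket)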

Accessing AWS Through APIs

All of the SDKs provide the functionality required to connect to the APIs, including taking care of the request signatures, cryptography, retries, and error handling. Each SDK includes a client API method to call the API directly for each request. A client API call works in a 1:1 request/response manner with no server-side processing: each call addresses exactly one object on AWS. Some of the more mature SDKs also include a resource API method for calling the AWS APIs. The resource API enables you to perform operations on multiple objects much more easily because the request is processed on the resource side. With the client API, to operate on a group of instances or on objects matching a certain filter, you first need to issue a request to enumerate the objects, select the ones you want to interact with, and then send a separate client API call for each object. The resource API is a much more efficient way of making calls that address more than one object of an AWS service because you can search and filter on the resource side, which lets you perform several operations with a single call.


The following example lists the objects in an S3 bucket along with their sizes. To achieve this with the client API in Python, you first need to get a list of all the objects in the bucket; once you have the list of keys, you then look up the size of each object with a separate request. The following code performs this action through the client API:

import boto3

# Create a low-level S3 client
client = boto3.client('s3')
# List all the objects in the bucket
response = client.list_objects(Bucket='29378425-r4nd0m-bucket')
for content in response['Contents']:
    # Look up each object individually to read its size
    obj_dict = client.get_object(Bucket='29378425-r4nd0m-bucket',
                                 Key=content['Key'])
    # Print the name of the file (key) and its size in bytes
    print(content['Key'], obj_dict['ContentLength'])

You can achieve the same result much more simply by using the resource API with the following:

import boto3
# Using the resource API, connect to the S3 bucket
s3 = boto3.resource('s3')
bucket = s3.Bucket('29378425-r4nd0m-bucket')
# For each object in the bucket print the name and size
for obj in bucket.objects.all():
    print(obj.key, obj.size)

The benefit of the resource API is that it provides the ability to automatically paginate the responses; with the client API, in contrast, the developer needs to take care of the pagination.
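To illustrate what handling pagination with the client API involves, the following sketch uses boto3's built-in paginator support (client.get_paginator is a real boto3 call) to walk through every page of results:

import boto3

client = boto3.client('s3')
# The paginator issues follow-up requests transparently, one per page of results
paginator = client.get_paginator('list_objects_v2')
for page in paginator.paginate(Bucket='29378425-r4nd0m-bucket'):
    for content in page.get('Contents', []):
        print(content['Key'], content['Size'])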

The AWS APIs themselves are independent of any programming language; for each programming language supported by the SDKs, AWS provides one or more namespaces that expose the API operations in that language. The AWS APIs are publicly documented, so if you want to perform a certain action, you can look up the documentation for your preferred programming language; see https://docs.aws.amazon.com/AWSEC2/latest/APIReference/making-api-requests.html.

Once you have selected the call you wish to make, you simply post the call to the appropriate endpoint defined at the link above, and the AWS API responds to your request with either a successful response or an error. A 400-series error indicates a problem with your request, while a 500-series error denotes that the service is not able to serve the response at the moment, either because the service is unavailable or because a service call threshold has been exceeded. No matter the cause, you handle 500-series errors by retrying the request, and it is always a good idea to perform retries with an exponential back-off to avoid oversaturating the service with calls.
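As a minimal sketch of this retry pattern in Python, the following hypothetical helper retries a call on 500-series errors with exponential back-off plus a little jitter. Note that boto3 already performs automatic retries for many errors, so in practice you would tune its built-in retry configuration rather than roll your own:

import random
import time

import boto3
from botocore.exceptions import ClientError

client = boto3.client('s3')

def list_objects_with_backoff(bucket, max_attempts=5):
    """Retry 500-series errors with exponential back-off (illustrative only)."""
    for attempt in range(max_attempts):
        try:
            return client.list_objects_v2(Bucket=bucket)
        except ClientError as err:
            status = err.response['ResponseMetadata']['HTTPStatusCode']
            if status >= 500 and attempt < max_attempts - 1:
                # Sleep 2^attempt seconds plus jitter before the next attempt
                time.sleep(2 ** attempt + random.random())
            else:
                raise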

Summary

This chapter looked at the basics of cloud computing and defined the three layers that most cloud computing solutions fit into. You also learned about the structure of the AWS services and gained a basic understanding of the services available in the Fundamental, Platform, and Management groups. The chapter concluded with a look at the architecture of the AWS environment and the ways you can interact with the environment.

Exam Preparation Tasks

To prepare for the exam, use this section to review the topics covered and the key aspects that will allow you to gain the knowledge required to pass the exam. To gain the necessary knowledge, complete the exercises, examples, and questions in this section in combination with Chapter 9, “Final Preparation,” and the exam simulation questions in the Pearson Test Prep Software Online.

Review All Key Topics

Review the most important topics in this chapter, noted with the Key Topics icon in the outer margin of the page. Table 1-2 lists these key topics and the page number on which each is found.

Table 1-2 Key Topics for Chapter 1

Key Topic Element   Description                                                Page Number
Figure 1-3          Containers vs. VMs                                         11
Figure 1-4          Levels of shared responsibility in IaaS, PaaS, and SaaS    13
Section             AWS services overview                                      14
Section             AWS global architecture                                    20
Section             Creating an AWS account                                    23
Section             Installing and using the AWS CLI                           29
Code example        Python example of the resource API versus the client API  33

Define Key Terms

Define the following key terms from this chapter and check your answers in the glossary:

Infrastructure as a Service (IaaS)

Platform as a Service (PaaS)

Software as a Service (SaaS)

shared responsibility model

availability zone

region

edge location

high availability

caching

storage

compute

networking

CLI

SDK

Q&A

The answers to these questions appear in Appendix A. For more practice with exam format questions, use the Pearson Test Prep Software Online.

1. Complete this sentence: A fault isolation environment that is composed from one or more datacenters in AWS is called a(n) _____________.

2. Complete this sentence: To replicate a corporate network, a cloud customer would use the _______ service model, as defined by NIST.

3. Complete this sentence: ________ is the next-generation DNS service available from AWS.

4. What is the most important security recommendation when opening an AWS account?

5. What type of access to AWS can you gain with a secret key and an access key ID?

6. What happens when you run the help command in the CLI?

7. What are the three components of the AWS CLI model?

8. Name the two API method types that are available in some SDKs.

9. What does the following command do?

aws ec2 describe-instances --profile developer01

10. What option can you use to override the default output of the AWS CLI?
