Chapter 3
Security and AWS Identity and Access Management (IAM)

THE AWS CERTIFIED SYSOPS ADMINISTRATOR - ASSOCIATE EXAM TOPICS COVERED IN THIS CHAPTER MAY INCLUDE, BUT ARE NOT LIMITED TO, THE FOLLOWING:

  • Domain 1.0: Monitoring and Metrics
  • 1.1 Demonstrate ability to monitor availability and performance
  • Domain 6.0: Security
  • 6.1 Implement and manage security policies
  • 6.2 Ensure data integrity and access controls when using the AWS platform
  • 6.3 Demonstrate understanding of the shared responsibility model
  • 6.4 Demonstrate ability to prepare for security assessment use of AWS
  • Content may include the following:
    • AWS platform compliance
    • AWS security attributes (customer workloads down to physical layer)
    • AWS administration and security services
    • AWS Identity and Access Management (IAM)
    • Amazon Virtual Private Cloud (Amazon VPC)
    • AWS CloudTrail
    • Amazon CloudWatch
    • AWS Config
    • Amazon Inspector
    • Ingress vs. egress filtering and which AWS Cloud services and features fit
    • Core Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Simple Storage Service (Amazon S3) security feature sets
    • Incorporating common conventional security products (firewall, Virtual Private Network [VPN])
    • Distributed Denial of Service (DDoS) mitigation
    • Encryption solutions (e.g., key services)
    • Complex access controls (e.g., sophisticated security groups, Access Control Lists [ACLs])


Security on AWS

AWS delivers a scalable cloud computing platform with high availability and dependability that provides the tools to enable you to run a wide range of applications. These tools assist you in protecting the confidentiality, integrity, and availability of your systems and data.

The AWS Certified SysOps Administrator – Associate exam focuses on how to use the AWS tool set to secure your account and your environment. The Security domain is 15 percent of this exam!

Shared Responsibility Model

Before we go into the details of how AWS secures its resources, let's look at how security in the cloud differs from security in your on-premises datacenters. When you move computer systems and data to the cloud, security responsibilities become shared between you and your Cloud Services Provider (CSP). In this case, AWS is responsible for securing the underlying infrastructure that supports the cloud, and you're responsible for anything that you put in the cloud or connect to the cloud. This shared responsibility model can reduce your operational burden in many ways, and in some cases, it may even improve your default security posture without any additional action on your part.

The amount of security configuration work you have to do varies depending on which services you select and how you evaluate the sensitivity of your data. However, there are certain security features—such as individual user accounts and credentials, Secure Sockets Layer (SSL)/Transport Layer Security (TLS) for data transmissions to encrypt data in transit, encryption of data at rest, and user activity logging—that you should configure no matter which AWS service you use.

AWS Security Responsibilities

AWS is responsible for protecting the global infrastructure that runs all of the services offered in the AWS Cloud. This infrastructure comprises the hardware, software, networking, and facilities that run AWS Cloud services. Protecting this infrastructure is AWS's number one priority. Although you can't visit AWS datacenters or offices to see this protection firsthand, AWS provides several reports from third-party auditors who have verified its compliance with a variety of relevant computer security standards and regulations.


Customer Security Responsibilities

With the AWS Cloud, you can provision virtual servers, storage, databases, and desktops in minutes instead of weeks. You can also use cloud-based analytics and workflow tools to process your data as you need it, and then store it in your own datacenters or in the cloud. Which AWS Cloud services you use determines how much configuration work you have to perform as part of your security responsibilities.

For example, for Amazon Elastic Compute Cloud (Amazon EC2) instances, you're responsible for management of the guest operating system (including updates and security patches), any application software or utilities you install on the instances, and the configuration of the AWS-provided firewall (called a security group) on each instance. These are basically the same security tasks that you're used to performing no matter where your servers are located. AWS managed services like Amazon RDS or Amazon Redshift provide all of the resources you need in order to perform a specific task, but without the configuration work that can come with them. With managed services, you don't have to worry about launching and maintaining instances, patching the guest operating system or database, or replicating databases; AWS handles that for you.

But as with all services, you should protect your AWS account credentials, and set up individual user accounts with AWS Identity and Access Management (IAM) so that each of your users has her own credentials and you can implement segregation of duties. You should consider using Multi-Factor Authentication (MFA) with each account, requiring the use of SSL/TLS to communicate with your AWS resources, and setting up Application Programming Interface (API) and user activity logging with AWS CloudTrail. Figure 3.1 demonstrates the shared responsibility model.


FIGURE 3.1 Shared responsibility model

AWS Global Infrastructure Security

AWS operates the global cloud infrastructure that you use to provision a variety of basic computing resources, such as processing and storage. The AWS global infrastructure includes the facilities, network, hardware, and operational software (for example, host operating system, virtualization software) that support the provisioning and use of these resources. The AWS global infrastructure is designed and managed according to security best practices as well as a variety of security compliance standards. As a systems operator, you can be assured that you’re building web architectures on top of some of the most secure computing infrastructure in the world. See Figure 3.2 for a depiction of AWS global infrastructure.


FIGURE 3.2 Amazon Web Services Regions and Availability Zones (as of April 2017)

Physical and Environmental Security

AWS datacenters are state of the art, using innovative architectural and engineering approaches. Amazon has many years of experience in designing, constructing, and operating large-scale datacenters. This experience has been applied to the AWS platform and infrastructure. AWS datacenters are housed in nondescript facilities. Physical access is strictly controlled both at the perimeter and at building ingress points by professional security staff using video surveillance, intrusion detection systems, and other electronic means. Authorized staff must pass two-factor authentication a minimum of two times to access datacenter floors. All visitors and contractors are required to present identification and are signed in and continually escorted by authorized staff.

AWS provides datacenter access and information only to employees and contractors who have a legitimate business need for such privileges. When an employee no longer has a business need for these privileges, his access is immediately revoked, even if he continues to be an employee of Amazon or AWS. All physical access to datacenters by AWS employees is logged and audited routinely.

Fire Detection and Suppression

AWS datacenters have automatic fire detection and suppression equipment to reduce risk. The fire detection system uses smoke detection sensors in all datacenter environments, mechanical and electrical infrastructure spaces, chiller rooms, and generator equipment rooms. These areas are protected by wet-pipe, double-interlocked pre-action or gaseous sprinkler systems.

Power

AWS datacenter electrical power systems are designed to be fully redundant and maintainable without impact to operations. Uninterruptible Power Supply (UPS) units provide backup power in the event of an electrical failure for critical and essential loads in the facility. AWS datacenters use generators to provide backup power for the entire facility.

Climate and Temperature

Climate control is required to maintain a constant operating temperature for servers and other hardware, which prevents overheating and reduces the possibility of service outages. AWS datacenters are built to maintain atmospheric conditions at optimal levels. Personnel and systems monitor and control temperature and humidity at appropriate levels.

Management

AWS monitors electrical, mechanical, and HVAC systems and equipment so that any issues are immediately identified. AWS staff performs preventative maintenance to maintain the continued operability of equipment.

Storage Device Decommissioning

When a storage device has reached the end of its useful life, AWS procedures include a decommissioning process that is designed to prevent customer data from being exposed to unauthorized individuals.

Business Continuity Management

Amazon’s infrastructure has a high level of availability and provides customers with the features to deploy a resilient IT architecture. AWS has designed its systems to tolerate system or hardware failures with minimal customer impact. Datacenter business continuity management at AWS is under the direction of the Amazon Infrastructure Group.

Availability

Datacenters are built in clusters in various global regions. All datacenters are online and serving customers; no datacenter is “cold.” In case of failure, automated processes move data traffic away from the affected area. Core applications are deployed in an N+1 configuration so that, in the event of a datacenter failure, there is sufficient capacity to enable traffic to be load-balanced to the remaining sites.

AWS provides its customers with the flexibility to place instances and store data in multiple geographic regions and also across multiple Availability Zones in each region. Each Availability Zone is designed as an independent failure zone. This means that Availability Zones are physically separated in a typical metropolitan region and are located in lower-risk flood plains (specific flood zone categorization varies by region). In addition to having discrete UPS and on-site backup generation facilities, they are each fed via different grids from independent utilities to reduce single points of failure further. Availability Zones are all redundantly connected to multiple tier-1 transit providers.


Incident Response

The Amazon Incident Management Team employs industry-standard diagnostic procedures to drive resolution during business-impacting events. Staff operators provide coverage 24 hours a day, 7 days a week to detect incidents and to manage their impact and resolution.

Communication

AWS has implemented various methods of internal communication at a global level to help employees understand their individual roles and responsibilities and to communicate significant events in a timely manner. These methods include orientation and training programs for newly hired employees, regular management meetings for updates on business performance and other matters, and electronic means such as video conferencing, electronic mail messages, and the posting of information via the Amazon intranet.

AWS has also implemented various methods of external communication to support its customer base and the community. Mechanisms are in place to allow the Customer Support Team to be notified of operational issues that impact the customer experience. An AWS Service Health Dashboard is available and maintained by the Customer Support Team to alert customers to any issues that may be of broad impact. The AWS Security Center is available to provide customers with security and compliance details about AWS. Customers can also subscribe to AWS Support offerings that include direct communication with the Customer Support Team and proactive alerts to any customer-impacting issues.

Network Security

The AWS network has been architected to permit you to select the level of security and resiliency appropriate for your workload. To enable you to build geographically dispersed, fault-tolerant web architectures with cloud resources, AWS has implemented a world-class network infrastructure that is carefully monitored and managed.

Secure Network Architecture

Network devices, including firewall and other boundary devices, are in place to monitor and control communications at the external boundary of the network and at key internal boundaries in the network. These boundary devices employ rule sets, Access Control Lists (ACLs), and configurations to enforce the flow of information to specific information system services.

ACLs, or traffic flow policies, are established on each managed interface to manage and enforce the flow of traffic. ACL policies are approved by Amazon Information Security. These policies are automatically pushed to ensure that these managed interfaces enforce the most up-to-date ACLs.

Secure Access Points

AWS has strategically placed a limited number of access points to the cloud to allow for a more comprehensive monitoring of inbound and outbound communications and network traffic. These customer access points are called API endpoints, and they permit secure HTTP access (HTTPS), which allows you to establish a secure communication session with your storage or compute instances within AWS. To support customers with Federal Information Processing Standard (FIPS) cryptographic requirements, the SSL-terminating load balancers in AWS GovCloud (US) are FIPS 140-2 compliant.

In addition, AWS has implemented network devices that are dedicated to managing interfacing communications with Internet Service Providers (ISPs). AWS employs a redundant connection to more than one communication service at each Internet-facing edge of the AWS network. These connections each have dedicated network devices.

Transmission Protection

You can connect to an AWS access point via HTTP or HTTPS using SSL, a cryptographic protocol that is designed to protect against eavesdropping, tampering, and message forgery. For customers who require additional layers of network security, AWS offers the Amazon Virtual Private Cloud (Amazon VPC) (as referenced in Chapter 5, “Networking”), which provides a private subnet within the AWS Cloud and the ability to use an IPsec Virtual Private Network (VPN) device to provide an encrypted tunnel between the Amazon VPC and your datacenter.

Network Monitoring and Protection

The AWS network provides significant protection against traditional network security issues, and you can implement further protection. The following are a few examples of the network monitoring and protection services and features that AWS offers.

Distributed Denial of Service (DDoS) attacks AWS API endpoints are hosted on large, Internet-scale, world-class infrastructure that benefits from the same engineering expertise that has built Amazon into the world’s largest online retailer. Proprietary Distributed Denial of Service (DDoS) mitigation techniques are used. Additionally, AWS networks are multi-homed across a number of providers to achieve Internet access diversity.

Man-in-the-Middle (MITM) attacks All of the AWS APIs are available via SSL-protected endpoints that provide server authentication. Amazon EC2 Amazon Machine Images (AMIs) automatically generate new Secure Shell (SSH) host certificates on first boot and log them to the instance’s console. You can then use the secure APIs to call the console and access the host certificates before logging into the instance for the first time. AWS encourages you to use SSL for all of your interactions.

IP spoofing Amazon EC2 instances cannot send spoofed network traffic. The AWS-controlled, host-based firewall infrastructure will not permit an instance to send traffic with a source IP or Media Access Control (MAC) address other than its own.

Port scanning Unauthorized port scans by Amazon EC2 customers are a violation of the AWS Acceptable Use Policy. Violations of the AWS Acceptable Use Policy are taken seriously, and every reported violation is investigated. Customers can report suspected abuse via the contacts available on the AWS website. When AWS detects unauthorized port scanning, it is stopped and blocked.

Port scans of Amazon EC2 instances are generally ineffective because, by default, all inbound ports on Amazon EC2 instances are closed and are only opened by the customer. Strict management of security groups can further mitigate the threat of port scans. If you configure a security group to allow traffic from any source to a specific port, then that specific port will be vulnerable to a port scan. In these cases, you must use appropriate security measures to protect listening services that may be essential to your application from being discovered by an unauthorized port scan. For example, a web server must clearly have port 80 (HTTP) open to the world, and the administrator of this server is responsible for the security of the HTTP server software, such as Apache®.

You may request permission to conduct vulnerability scans as required to meet your specific compliance requirements. These scans must be limited to your own instances and must not violate the AWS Acceptable Use Policy. Advance approval for these types of scans can be requested by submitting a request via the AWS website.

Packet sniffing by other tenants Although you can place your interfaces into promiscuous mode, the Hypervisor will not deliver any traffic to them that is not addressed to them. Even two virtual instances that are owned by the same customer located on the same physical host cannot listen to each other’s traffic. Although Amazon EC2 does provide ample protection against one customer inadvertently or maliciously attempting to view another customer’s data, as a standard practice, you should encrypt sensitive traffic.



AWS Compliance Program

AWS Compliance enables you to understand the robust controls in place at AWS to maintain security and data protection in the cloud. As you build your systems on top of the AWS Cloud infrastructure, compliance responsibilities will be shared. By tying together governance-focused, audit-friendly service features with applicable compliance or audit standards, AWS Compliance enablers build on traditional programs and help customers operate in a secure and controlled environment. The IT infrastructure that AWS provides is designed and managed in alignment with security best practices and a variety of IT security standards, including, but not limited to, the following:

  • Service Organization Controls (SOC) 1/Statements on Standards for Attestation Engagements (SSAE) 16/International Standard on Assurance Engagements (ISAE) 3402 (formerly Statement on Auditing Standards [SAS] 70), SOC 2, and SOC 3
  • Federal Information Security Management Act (FISMA)
  • Federal Risk and Authorization Management Program (FedRAMP)
  • Department of Defense (DoD) Security Requirements Guide (SRG) Levels 2 and 4
  • Payment Card Industry Data Security Standard (PCI DSS) Level 1
  • International Organization for Standardization (ISO) 9001, ISO 27001, ISO 27017, and ISO 27018
  • International Traffic in Arms Regulations (ITAR)
  • FIPS 140-2
  • Singapore Multi-Tier Cloud Security Standard (MTCS) Level 3
  • Germany Cloud Computing Compliance Controls Catalog (C5)
  • United Kingdom Cyber Essentials Plus
  • Australia Information Security Registered Assessors Program (IRAP)

In addition, the flexibility and control that the AWS platform provides allow you to deploy solutions that meet several industry-specific standards, including:

  • Criminal Justice Information Services (CJIS)
  • Cloud Security Alliance (CSA)
  • Family Educational Rights and Privacy Act (FERPA)
  • Health Insurance Portability and Accountability Act (HIPAA)
  • Motion Picture Association of America (MPAA)


Now that we have discussed the shared responsibility model, let’s move on to IAM and how to secure your AWS account.

Securing Your AWS Account with AWS Identity and Access Management (IAM)

AWS Identity and Access Management (IAM) provides centralized management of access and authentication of users to the services in an AWS account. The service provides the mechanisms to identify who has access to an AWS account and to control what they can do with the AWS Cloud services in that AWS account. The IAM service is provided at no additional charge.

User, group, and role entities are created in IAM and have policies (JSON policy documents) applied to them to control their access to AWS resources. IAM allows your policies to define how resources can be accessed (for example, launching and terminating Amazon EC2 instances) and what resources can be accessed (such as Amazon S3 buckets and their contents). An IAM best practice is to use the principle of least privilege when setting the policies that control access to AWS resources. IAM allows appropriate credentials to be defined for users.

IAM is also integrated with AWS Marketplace so that you can control who in your organization can subscribe to the software and services offered in AWS Marketplace. Because subscribing to certain software in AWS Marketplace launches an Amazon EC2 instance to run the software, this is an important access control feature. Using IAM to control access to AWS Marketplace also enables AWS account owners to have fine-grained control over usage and software costs.

In this section, we cover IAM users, IAM groups, IAM roles, and IAM policies.

IAM User

An IAM user is an entity created in an AWS account that provides a way to interact with the resources in the account. A user can be any individual, system, or application that interacts with AWS resources, either programmatically (using AWS Software Development Kits [SDKs]), through the AWS Management Console, or through the AWS Command Line Interface (AWS CLI). Each user has a unique name within the AWS account and a unique set of security credentials not shared with other users. IAM eliminates the need to share passwords or keys and enables you to minimize the use of your AWS account credentials.

When an AWS account is created, the credentials used at account creation become the root user credentials (for example, the root user name is the email address specified at account creation). It is important to know that the actions of the root user cannot be restricted. Any entity (a user or application) that has the root user credentials can undertake any activity in an AWS account. A best practice is not to use the root user credentials and instead create a separate IAM user with all administrative privileges. Use that IAM user to apply the principle of least privilege by creating additional individual IAM accounts for your various users.

IAM Credentials

There are three ways to use AWS Cloud services and access resources: via the AWS Management Console using a web browser, the AWS CLI, and the AWS SDKs through API calls. These three access mechanisms require credentials.

The AWS Management Console requires either the root user email address and a password, or an IAM user name and a password. A best practice is not to use the root user email address for access to the AWS account. The AWS CLI and AWS SDKs require an access key ID and a secret access key. Create credentials only as needed to maintain the principle of least privilege. See Table 3.1 for descriptions of IAM credentials.

TABLE 3.1 IAM Credentials

Type of Security Credential | Description
Email address and password  | Associated with the AWS account root user
IAM user name and password  | Used to access the AWS Management Console
Access keys                 | Typically used with the AWS CLI and programmatic requests (APIs, AWS SDKs)
Key pairs                   | Used only for specific AWS Cloud services, such as Amazon EC2
MFA                         | Can be enabled for the root user and IAM users as an extra layer of security

Managing Passwords

Passwords are required to access your AWS account, individual IAM user accounts, AWS Discussion Forums, and the AWS Support Center. You specify the password when you first create the account, and you can change it at any time by going to the Security Credentials page. AWS passwords can be up to 128 characters long and contain special characters, so we encourage you to create a strong password that cannot be easily guessed.

Password Policy Options

The following list describes the options that are available when you configure a password policy for your account:

Require minimum password length. You can specify the minimum number of characters allowed in an IAM user password. You can enter any number from 6 to 128.

Require at least one uppercase letter. You can require that IAM user passwords contain at least one uppercase character from the ISO basic Latin alphabet (A to Z).

Require at least one lowercase letter. You can require that IAM user passwords contain at least one lowercase character from the ISO basic Latin alphabet (a to z).

Require at least one number. You can require that IAM user passwords contain at least one numeric character (0 to 9).

Require at least one non-alphanumeric character. You can require that IAM user passwords contain at least one of the following non-alphanumeric characters: ! @ # $ % ^ & * ( ) _ + - = [ ] { } | '.

Allow users to change their own passwords. You can permit all IAM users in your account to use the IAM console to change their own passwords.

Enable password expiration. You can set IAM user passwords to be valid for only the specified number of days. You specify the number of days that passwords remain valid after they are set. You can choose a password expiration period between 1 and 1,095 days, inclusive.

Prevent password reuse. You can prevent IAM users from reusing a specified number of previous passwords. You can set the number of previous passwords from 1 to 24, inclusive.

Password expiration requires administrator reset. You can prevent IAM users from choosing a new password after their current password has expired. If you leave this checkbox cleared and an IAM user allows her password to expire, the user will be required to set a new password before accessing the AWS Management Console.
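Each of the options above maps directly to a parameter of the IAM UpdateAccountPasswordPolicy API. The following minimal sketch applies an account-wide password policy using Python and the AWS SDK for Python (Boto3); the specific values are illustrative choices, not recommendations:

    import boto3

    iam = boto3.client('iam')

    # Apply an account-wide password policy; each parameter corresponds
    # to one of the console options described above.
    iam.update_account_password_policy(
        MinimumPasswordLength=12,          # any value from 6 to 128
        RequireUppercaseCharacters=True,
        RequireLowercaseCharacters=True,
        RequireNumbers=True,
        RequireSymbols=True,               # non-alphanumeric characters
        AllowUsersToChangePassword=True,
        MaxPasswordAge=90,                 # password expiration, in days
        PasswordReusePrevention=24,        # remember up to 24 old passwords
        HardExpiry=False                   # True = administrator reset required
    )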





Managing IAM Access Keys

AWS requires that all API requests be signed; that is, they must include a digital signature that AWS can use to verify the identity of the requestor. You can calculate the digital signature using a cryptographic hash function. The input to the hash function in this case includes the text of your request and your secret access key. If you use any of the AWS SDKs to generate requests, the digital signature calculation is done for you; otherwise, you can have your application calculate it and include it in your REST or Query requests by following the directions in the AWS documentation.

Not only does the signing process help protect message integrity by preventing tampering with the request while it is in transit, it also helps protect against potential replay attacks. A request must reach AWS within 15 minutes of the timestamp in the request; otherwise, AWS denies the request.

The most recent version of the digital signature calculation process is Signature Version 4, which calculates the signature using the Keyed-Hash Message Authentication Code (HMAC)-Secure Hash Algorithm (SHA) 256 protocol. Version 4 provides an additional measure of protection over previous versions by requiring that you sign the message using a key that is derived from your secret access key instead of using the secret access key itself. In addition, you derive the signing key based on credential scope, which facilitates cryptographic isolation of the signing key.
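The derived signing key can be computed with a short chain of HMAC-SHA256 operations, one per element of the credential scope. The sketch below follows the published Signature Version 4 derivation; the secret key, date, region, and service values are placeholders:

    import hashlib
    import hmac

    def _hmac_sha256(key, msg):
        return hmac.new(key, msg.encode('utf-8'), hashlib.sha256).digest()

    def derive_signing_key(secret_access_key, date_stamp, region, service):
        # Each step scopes the key more narrowly; the secret access key
        # itself is never used directly to sign the request.
        k_date = _hmac_sha256(('AWS4' + secret_access_key).encode('utf-8'), date_stamp)
        k_region = _hmac_sha256(k_date, region)
        k_service = _hmac_sha256(k_region, service)
        return _hmac_sha256(k_service, 'aws4_request')

    # Example credential scope: a key valid only for one day, one region,
    # and one service.
    signing_key = derive_signing_key('wJalrEXAMPLEKEY', '20170401', 'us-east-1', 's3')

Because the signing key is scoped to a single day, region, and service, a leaked signing key is far less damaging than a leaked secret access key.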

Because access keys can be misused if they fall into the wrong hands, we encourage you to save them in a safe place and not embed them in your code. For customers with large fleets of elastically scaling Amazon EC2 instances, the use of IAM roles can be a more secure and convenient way to manage the distribution of access keys. IAM roles provide temporary credentials, which are not only automatically loaded to the target instance, but are also automatically rotated multiple times a day.

Access keys comprise two components:

  • Access key ID
  • Secret access key

The access key is active by default. Each user can have two active access keys. This enables access keys to be rotated without the user temporarily losing access to his AWS account. Users can be given permissions to list, rotate, and manage their own keys.

Access keys can be disabled or deleted to revoke access. Disabling an access key makes it inactive. A deleted access key is removed forever and cannot be retrieved.

A best practice is to rotate access keys regularly for all of the IAM users in an AWS account. Unnecessary credentials should be removed from users who do not need them. IAM can be used to obtain an access key history, which includes the time when the key was last used along with the region and service that was accessed. This information helps when rotating old keys and removing active keys in an AWS account.
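A rotation workflow might look like the following sketch (Boto3; the user name is hypothetical). Because each user can hold two active keys, the new key is created and distributed before the old one is disabled:

    import boto3

    iam = boto3.client('iam')
    user = 'alice'  # hypothetical IAM user

    # 1. Create a second access key and distribute it to the application.
    new_key = iam.create_access_key(UserName=user)['AccessKey']

    # 2. Once the new key is in use, deactivate (do not yet delete) the old key.
    for key in iam.list_access_keys(UserName=user)['AccessKeyMetadata']:
        if key['AccessKeyId'] != new_key['AccessKeyId']:
            iam.update_access_key(UserName=user,
                                  AccessKeyId=key['AccessKeyId'],
                                  Status='Inactive')

    # 3. Check last-used information before deleting the inactive key for good:
    # iam.get_access_key_last_used(AccessKeyId='AKIA...')
    # iam.delete_access_key(UserName=user, AccessKeyId='AKIA...')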

The IAM access key can be placed in:

  • Linux: ~/.aws/credentials file
  • Windows: %USERPROFILE%\.aws\credentials

The root user access keys should not be used. The recommended practice is to delete the root user access keys.

Multi-Factor Authentication (MFA)

MFA adds an additional layer of security when accessing AWS Cloud services. When you enable this optional feature, you will need to provide a six-digit, single-use code in addition to your standard user name and password credentials before access is granted to your AWS account settings or AWS Cloud services and resources (for example, providing something you have [MFA device] and something you know [a password]). You can enable MFA devices for your AWS account and for the users you have created under your AWS account using IAM. In addition, you can add MFA protection for access across AWS accounts for when you want to allow a user you’ve created under one AWS account to use an IAM role to access resources under another AWS account. You can require the user to use MFA before assuming the role as an additional layer of security.

The IAM service supports the following MFA types:

  • Hardware devices (Gemalto)
  • Virtual MFA applications (such as Google Authenticator)
  • Short Message Service (SMS) text messages (via mobile devices)

Virtual MFA applications are software applications, typically installed on smart phones, that generate authentication codes. These authentication codes need to be compatible with the Time-based One Time Password (TOTP) standard in order to be used with AWS accounts.

SMS MFA uses SMS text messaging to verify an IAM user. When the user signs in to the AWS Management Console, she will receive an authentication code via text message that she will enter into her browser. Users do not need to use a hardware token or a virtual MFA application if they use SMS MFA.
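Associating a virtual MFA device with an IAM user can be sketched as follows (Boto3; the user name, device name, and the two consecutive TOTP codes are placeholders):

    import boto3

    iam = boto3.client('iam')

    # Create a virtual MFA device; the response contains the Base32 seed
    # (and a QR code) to load into an authenticator application.
    device = iam.create_virtual_mfa_device(VirtualMFADeviceName='alice-mfa')
    serial = device['VirtualMFADevice']['SerialNumber']

    # Enable the device by proving possession with two consecutive codes
    # generated by the authenticator application.
    iam.enable_mfa_device(UserName='alice',
                          SerialNumber=serial,
                          AuthenticationCode1='123456',
                          AuthenticationCode2='654321')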

IAM Groups

An IAM group is a collection of IAM users. There is no default user group in an AWS account. You can create IAM groups as appropriate for the account. IAM groups cannot be nested. An IAM user can be a member of multiple groups. Permissions can be specified for the entire group. Creating IAM groups and assigning permissions to them is the recommended approach. This practice eases the administrative burden when compared to specifying permissions for individual IAM users. Permissions are defined using IAM policy documents.
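The group-based approach can be sketched like this (Boto3; the group name and user are illustrative, and the policy shown is the AWS-managed ReadOnlyAccess policy):

    import boto3

    iam = boto3.client('iam')

    # Create a group once, attach the policy to the group (not the user),
    # and then manage membership as people join and leave the team.
    iam.create_group(GroupName='Auditors')
    iam.attach_group_policy(GroupName='Auditors',
                            PolicyArn='arn:aws:iam::aws:policy/ReadOnlyAccess')
    iam.add_user_to_group(GroupName='Auditors', UserName='alice')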

IAM Policies

IAM policies are JSON-formatted permission documents that are used to grant or deny access to IAM users. By default, an IAM user cannot do anything with any service in an AWS account, even if he has authentication credentials. Permission to take actions with a service must be explicitly provided to individual users, groups, or roles.

A least privilege approach should be used when assigning permissions so that there are just enough permissions to perform a job or set of tasks. Only grant the permissions necessary to perform a task. Least privilege reduces the chances of mistakes being made when actions are performed. It is easier to loosen permissions than tighten them when implementing security. IAM permissions can provide a granular level of control.

An IAM policy consists of four main elements:

  • Action
  • Effect
  • Resource
  • Conditions (optional)

IAM policy rules have an order of precedence: an explicit deny always overrides any allow; an action that is not explicitly denied but is explicitly allowed is allowed; and any action that is neither explicitly denied nor explicitly allowed is implicitly denied.

An IAM policy can also use policy conditions for extra security. For example, conditions could be set so that requests must originate from a specific range of IP addresses, or a request must use SSL or MFA.
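For example, a customer-managed policy that allows Amazon S3 read actions only from a corporate address range, and only when MFA is present, might be sketched as follows (Boto3; the bucket name and CIDR range are placeholders, and the dict literal mirrors the JSON policy document structure):

    import boto3
    import json

    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",                                  # Effect
            "Action": ["s3:GetObject", "s3:ListBucket"],        # Action
            "Resource": ["arn:aws:s3:::example-bucket",
                         "arn:aws:s3:::example-bucket/*"],      # Resource
            "Condition": {                                      # optional Conditions
                "IpAddress": {"aws:SourceIp": "203.0.113.0/24"},
                "Bool": {"aws:MultiFactorAuthPresent": "true"}
            }
        }]
    }

    iam = boto3.client('iam')
    iam.create_policy(PolicyName='S3ReadFromOfficeWithMFA',
                      PolicyDocument=json.dumps(policy))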

There are two types of IAM policies:

Managed policies Managed policies can be customer- and AWS-managed. These policies are standalone policies that are attached to multiple users, groups, and roles. AWS-managed policies are created by AWS. Customer-managed policies are created and managed by customers. Managed policies provide several features, including reusability, central change management, versioning and rollback, and the ability to delegate permissions management to other users. Examples of the managed policies provided by AWS include administrator access, power user access, read-only access, and IAM read-only access.

Inline policies Inline policies are embedded in a principal entity such as a user, group, or role. The policy is an inherent part of the entity. The same policy can be used for multiple entities, but those entities do not share the policy. Instead, each entity has its own copy of the policy (that is, inline policies cannot be centrally managed).


IAM Roles

IAM roles provide temporary user credentials. IAM user credentials are regarded as static credentials, and static AWS credentials should not be embedded in software source code or placed on Amazon EC2 instances, because there is a chance they could be compromised and used for unauthorized access to resources in an AWS account.

The principle of least privilege implies that an entity should not have an excessive set of permissions. If an IAM user undertakes a task only sporadically, a permanently assigned permission set may be more powerful than the user's daily activities require.

IAM roles are used to delegate access to AWS resources. An IAM role provides temporary access and eliminates the need to use static AWS credentials. Think of IAM roles as temporary users. The IAM roles do not have a user name or password associated with them; instead, they have temporary credentials consisting of the following:

  • Access key ID
  • Secret access key
  • Token
  • Duration

IAM roles enable temporary credentials to be created and issued when needed for the following:

  • Applications written with a language-specific AWS SDK
  • Amazon EC2 instance access to an AWS resource
  • IAM user to gain temporary credentials to undertake a more powerful action (for example, Amazon EC2 instance termination) only when needed
  • Federated user accounts

Just as permission documents are used to grant or deny access to IAM users, they are used to grant or deny access to IAM roles. In the case of an IAM role, the policy is attached to the role (not to the IAM user or IAM group member assuming the IAM role). By default, AWS resources cannot interact with AWS Cloud services; the resources need to have the necessary permissions. IAM roles can be used to provide AWS resources with access to AWS Cloud services.

IAM roles can be used to provide access to resources and services within an AWS account to externally authenticated users and third parties. External users authenticate against an external identity store, and federation allows them to get temporary credentials when they assume a role.

An IAM user can switch to an IAM role to gain access to resources in their current AWS account that are not accessible with their normal permissions. An IAM user who requires cross-account access can use an IAM role to gain access to resources in another AWS account.
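Assuming a role with AWS Security Token Service (STS) returns exactly the temporary credential set listed earlier: an access key ID, a secret access key, a session token, and an expiration. A cross-account sketch (Boto3; the role ARN is hypothetical):

    import boto3

    sts = boto3.client('sts')

    # Request temporary credentials for a role in another AWS account.
    resp = sts.assume_role(
        RoleArn='arn:aws:iam::999999999999:role/CrossAccountAuditor',
        RoleSessionName='audit-session',
        DurationSeconds=3600)            # duration of the credentials

    creds = resp['Credentials']          # AccessKeyId, SecretAccessKey,
                                         # SessionToken, Expiration

    # Use the temporary credentials instead of any static IAM user keys.
    s3 = boto3.client('s3',
                      aws_access_key_id=creds['AccessKeyId'],
                      aws_secret_access_key=creds['SecretAccessKey'],
                      aws_session_token=creds['SessionToken'])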

Best Practices for Securing Your AWS Account

AWS recommends several best practices for securing your AWS account:

  1. Require MFA for root-level access.
  2. Do not share root credentials with anyone other than the account holder.
  3. Physically secure root account hardware MFA devices in a safe place, such as a vault.
  4. Create individual IAM users.
  5. Use groups to assign permissions to IAM users.
  6. Enable MFA for privileged users.
  7. Use IAM roles for applications that run on Amazon EC2 instances.
  8. Delegate by using IAM roles instead of sharing credentials.
  9. Rotate credentials regularly.
  10. Remove unnecessary credentials.
  11. Use policy conditions for extra security.
  12. Monitor activity on your AWS account.
  13. Remove root credentials.
  14. Use access levels to review IAM permissions.
  15. Use AWS-defined policies to assign permissions whenever possible.
  16. Use IAM roles to provide cross-account access.

So far, you have learned about the shared responsibility model and how to secure your IAM account. Now let’s talk about securing your AWS resources.

Securing Your AWS Cloud Services

In this section, we discuss Amazon EC2 key pairs, X.509 certificates, AWS Key Management Services (AWS KMS), and AWS CloudHSM.

Key Pairs

Amazon EC2 instances created from a public AMI use a public/private key pair instead of a password for signing in via SSH. The public key is embedded in your instance, and you use the private key to sign in securely without a password. After you create your own AMIs, you can choose other mechanisms to log in securely to your new instances. You can have a key pair generated automatically for you when you launch the instance, or you can upload your own. Save the private key in a safe place on your system, and record the location where you saved it. For Amazon CloudFront, you use key pairs to create signed URLs for private content, such as when you want to distribute restricted content that someone paid for.
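Generating a key pair at launch time can be sketched as follows (Boto3; the key name and AMI ID are placeholders). AWS stores only the public key; the private key material is returned once and must be saved:

    import boto3

    ec2 = boto3.client('ec2')

    # AWS keeps the public key; the private key is returned only here.
    key = ec2.create_key_pair(KeyName='ops-key')
    with open('ops-key.pem', 'w') as f:
        f.write(key['KeyMaterial'])      # save the private key securely

    # Launch an instance whose SSH logins use this key pair.
    ec2.run_instances(ImageId='ami-0123456789abcdef0',  # placeholder AMI ID
                      InstanceType='t2.micro',
                      KeyName='ops-key',
                      MinCount=1, MaxCount=1)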


X.509 Certificates

X.509 certificates are used to sign SOAP-based requests. X.509 certificates contain a public key and additional metadata (for example, an expiration date that AWS verifies when you upload the certificate) and are associated with a private key. When you create a request, you create a digital signature with your private key and then include that signature in the request, along with your certificate. AWS verifies that you are the sender by decrypting the signature with the public key that is in your certificate. AWS also verifies that the certificate you sent matches the certificate that you uploaded to AWS.

AWS Key Management Service (AWS KMS) Security

AWS KMS provides a simple interface that can be used to generate and manage cryptographic keys and operate as a cryptographic service provider for protecting data. AWS KMS offers traditional key management services integrated with AWS Cloud services to provide a consistent view of customers’ keys across AWS, with centralized management and auditing.

AWS KMS provides a simple web interface in the AWS Management Console, AWS CLI, and RESTful APIs to access an elastic, multi-tenant, Hardened Security Appliance (HSA).

You can establish your own HSA-based cryptographic contexts under your master keys. These keys are accessible only on the HSAs, and they can be used to perform HSA-resident cryptographic operations, including the issuance of application data keys (encrypted under your master key). You can create multiple master keys, each represented with an HSA-based Customer Master Key (CMK) identified by its key ID. You can use the AWS KMS console to define access controls on who can manage and/or use master keys by creating a policy that is attached to the key. This allows you to define application-specific uses for your keys on a per-API basis.

All requests to AWS KMS must be made over the TLS protocol and terminate on an AWS KMS host. AWS KMS hosts will only allow TLS with a cipher suite that provides perfect forward secrecy. AWS KMS authenticates and authorizes customer requests using the same credential and policy mechanisms available for all other AWS APIs, including IAM. AWS KMS is designed to meet the following requirements:

Durability The durability of cryptographic keys is designed to equal that of the highest-durability services in AWS. A single cryptographic key can encrypt large volumes of customer data accumulated over a long time period. Data encrypted under a key becomes irretrievable if the key is lost.

Quorum-based access No single Amazon employee can gain access to CMKs. There is no mechanism to export plaintext CMKs. Confidentiality of your cryptographic keys is crucial.

Access control Use of keys is protected by access control policies defined and managed by you.

Low latency and high throughput AWS KMS provides cryptographic operations at latency and throughput levels suitable for use by other services in AWS.

Regional independence AWS provides regional independence for customer data. Key usage is isolated in an AWS Region.

Secure source of random numbers Because strong cryptography depends on truly unpredictable random number generation, AWS provides a high-quality source of random numbers.

Audit AWS records the use of cryptographic keys in AWS CloudTrail Logs. Customers can use AWS CloudTrail Logs to inspect use of their cryptographic keys, including use of keys by AWS Cloud services on the customer’s behalf.
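The application data keys mentioned earlier underpin envelope encryption: AWS KMS returns a data key both in plaintext (for encrypting data locally) and encrypted under your CMK (for storage alongside the ciphertext). A sketch using Boto3, with a hypothetical key alias:

    import boto3

    kms = boto3.client('kms')

    # Ask AWS KMS for a 256-bit data key under a customer master key.
    data_key = kms.generate_data_key(KeyId='alias/app-data', KeySpec='AES_256')

    plaintext_key = data_key['Plaintext']        # use locally, then discard
    encrypted_key = data_key['CiphertextBlob']   # store next to the ciphertext

    # ... encrypt data locally with plaintext_key (e.g., AES-GCM) ...

    # Later, recover the plaintext data key; only principals allowed by the
    # CMK's key policy can perform this Decrypt call.
    restored = kms.decrypt(CiphertextBlob=encrypted_key)['Plaintext']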

The AWS KMS system includes a set of AWS KMS operators and service host operators (collectively, “operators”) that administer “domains.” A domain is a regionally defined set of AWS KMS servers, HSAs, and operators. Each entity has a hardware token that contains a private and public key pair used to authenticate its actions. The HSAs have an additional private and public key pair used to establish encryption keys to protect HSA-to-HSA communications.

AWS CloudHSM Security

The AWS CloudHSM service provides dedicated access to a Hardware Security Module (HSM) appliance designed to provide secure cryptographic key storage and operations in an intrusion-resistant, tamper-evident device. You can generate, store, and manage the cryptographic keys used for data encryption so that they are accessible only by you.

AWS CloudHSM appliances are designed to store and process cryptographic key material securely for a wide variety of uses such as database encryption, Digital Rights Management (DRM), Public Key Infrastructure (PKI), authentication and authorization, document signing, and transaction processing. They support some of the strongest cryptographic algorithms available, including Advanced Encryption Standard (AES), RSA, Elliptic Curve Cryptography (ECC), and many others. The AWS CloudHSM service is designed to be used with Amazon EC2 and Amazon VPC, which provide the appliance with its own private IP within a private subnet. You can connect to AWS CloudHSM appliances from your Amazon EC2 servers through SSL/TLS, which uses two-way digital certificate authentication and 256-bit SSL encryption to provide a secure communication channel.

Selecting AWS CloudHSM in the same region as your Amazon EC2 instance decreases network latency, which can improve your application performance. You can configure a client on your Amazon EC2 instance that allows your applications to use the APIs provided by the HSM, including Public-Key Cryptography Standards (PKCS) #11, Microsoft Cryptographic Application Programming Interface (CAPI), and Java Cryptography Architecture/Java Cryptography Extensions (Java JCA/JCE).

AWS has administrative credentials to the appliance, but these credentials can only be used to manage the appliance, not the HSM partitions on the appliance. AWS uses these credentials to monitor and maintain the health and availability of the appliance.

AWS cannot extract your keys, nor can AWS cause the appliance to perform any cryptographic operation using your keys. The HSM appliance has both physical and logical tamper detection and response mechanisms that erase the cryptographic key material and generate event logs if tampering is detected. The HSM is designed to detect tampering if the physical barrier of the HSM appliance is breached. In addition, after three unsuccessful attempts to access an HSM partition with HSM admin credentials, the HSM appliance erases its HSM partitions.

When your AWS CloudHSM subscription ends and you have confirmed that the contents of the HSM are no longer needed, you must delete each partition and its contents as well as any logs. As part of the decommissioning process, AWS zeroizes the appliance, permanently erasing all key material.

Monitoring to Enhance Security

In this section, you are introduced to the various monitoring tools offered by AWS. This section serves as an overview of the tools. More details can be found in Chapter 9, “Monitoring and Metrics.”

AWS CloudTrail

As important as credentials and encrypted endpoints are for preventing security problems, logs are just as crucial for understanding events after a problem has occurred. To be effective as a security tool, a log must include not just a list of what happened and when, but also identify the source. To help you with your after-the-fact investigations and near-real-time intrusion detection, AWS CloudTrail provides a log of all requests for AWS resources in your account. For each event, you can see what service was accessed, what action was performed, and who made the request. AWS CloudTrail captures information about every API call to every AWS resource you use, including sign-in events.

After you have enabled AWS CloudTrail, event logs are delivered every five minutes. You can configure AWS CloudTrail so that it aggregates log files from multiple regions into a single Amazon S3 bucket. From there, you can then upload them to your preferred log management and analysis solutions to perform security analysis and detect user behavior patterns. By default, log files are stored securely in Amazon S3, but you can also archive them to Amazon Glacier to help meet audit and compliance requirements. More information on Amazon Glacier and Amazon S3 is provided in Chapter 6, “Storage.”
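For quick after-the-fact queries, recent management events can also be retrieved programmatically. A sketch using the CloudTrail LookupEvents API (Boto3; the user name and time window are illustrative):

    import boto3
    from datetime import datetime, timedelta

    cloudtrail = boto3.client('cloudtrail')

    # Who did what in the last 24 hours, filtered to a single user.
    resp = cloudtrail.lookup_events(
        LookupAttributes=[{'AttributeKey': 'Username',
                           'AttributeValue': 'alice'}],
        StartTime=datetime.utcnow() - timedelta(days=1),
        EndTime=datetime.utcnow())

    for event in resp['Events']:
        print(event['EventTime'], event['EventSource'], event['EventName'])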

In addition to AWS CloudTrail’s user activity logs, you can use the Amazon CloudWatch Logs feature to collect and monitor system, application, and custom log files from your Amazon EC2 instances and other sources in near real time. For example, you can monitor your web server’s log files for invalid user messages to detect unauthorized login attempts to your guest operating system. More information on Amazon CloudWatch and Amazon CloudWatch logs can be found in Chapter 9.

Amazon Virtual Private Cloud (Amazon VPC) Flow Logs

AWS CloudTrail captures the API calls made as users interact with the AWS Cloud services and resources in an AWS account. You still need to have a complete view of what is happening within your Amazon VPC (for example, network traffic flowing into and out of your Amazon VPC).

Amazon VPC Flow Logs is a feature that enables you to capture information about the IP traffic going to and from network interfaces in your Amazon VPC. Flow log data is stored using Amazon CloudWatch Logs. After you have created a flow log, you can view and retrieve its data in Amazon CloudWatch Logs.
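Creating a flow log for a VPC can be sketched as follows (Boto3; the VPC ID, log group name, and the ARN of an IAM role permitting delivery to Amazon CloudWatch Logs are placeholders):

    import boto3

    ec2 = boto3.client('ec2')

    # Capture accepted and rejected traffic for every interface in the VPC.
    ec2.create_flow_logs(
        ResourceIds=['vpc-0123456789abcdef0'],
        ResourceType='VPC',
        TrafficType='ALL',                 # ACCEPT, REJECT, or ALL
        LogGroupName='vpc-flow-logs',
        DeliverLogsPermissionArn='arn:aws:iam::123456789012:role/flow-logs-role')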

In addition, Elastic Load Balancing and elastic network interfaces provide access logs that capture detailed information about requests or connections sent to your load balancer. Each log contains information such as the time it was received, the client’s IP address, latencies, request paths, and server responses. Detailed information on Amazon VPC Flow Logs can be found in Chapter 5.

Amazon CloudWatch

Amazon CloudWatch provides a means of monitoring the use of AWS resources. AWS provides standard metrics for a variety of AWS resources. You can also publish your own custom metrics, using agents installed on your systems to feed data to Amazon CloudWatch for monitoring.

Amazon CloudWatch Logs enable the operating system and other logs to be monitored by Amazon CloudWatch. Additionally, AWS CloudTrail logs can be monitored in real time using Amazon CloudWatch Logs. Pattern filtering is used to analyze the logs. The log events are evaluated for matches against terms, phrases, and values specified in the pattern filter. Amazon CloudWatch Alarms can be created to report out-of-bound conditions discovered in log files. Amazon CloudWatch Alarms are triggered based on thresholds that you specified in the alarm. The alarm can be configured to send notifications and perform an action.
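The pattern-filter-to-alarm pipeline just described might be wired up like this (Boto3; the log group, filter pattern, SNS topic, and thresholds are placeholders):

    import boto3

    logs = boto3.client('logs')
    cloudwatch = boto3.client('cloudwatch')

    # Turn matching log events into a custom metric...
    logs.put_metric_filter(
        logGroupName='/var/log/secure',
        filterName='failed-ssh-logins',
        filterPattern='"Failed password"',
        metricTransformations=[{'metricName': 'FailedSSHLogins',
                                'metricNamespace': 'Security',
                                'metricValue': '1'}])

    # ...and alarm when the metric crosses a threshold.
    cloudwatch.put_metric_alarm(
        AlarmName='failed-ssh-logins-high',
        Namespace='Security', MetricName='FailedSSHLogins',
        Statistic='Sum', Period=300, EvaluationPeriods=1,
        Threshold=5, ComparisonOperator='GreaterThanOrEqualToThreshold',
        AlarmActions=['arn:aws:sns:us-east-1:123456789012:ops-alerts'])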

AWS Config

AWS Config records configuration changes to the resources in an AWS account. It can be used to retrieve an inventory of AWS resources in an AWS account at a particular time, to identify new and deleted resources, and to issue notifications when resource configurations change. AWS Config use cases include the following:

  • Resource discovery
  • Troubleshooting
  • Change management
  • Audit compliance
  • Security analysis

AWS Config Rules allow rules to be set up to check configuration changes recorded by AWS Config. There are prebuilt rules provided by AWS. You can also author your own rules, which will run on AWS Lambda. The rules are invoked automatically when triggered to provide continuous assessment. A dashboard can be used to visualize compliance and identify any offending changes.

AWS Config can trigger a rules evaluation when any resource that matches the rule’s scope changes in configuration (as when a security group has its rules modified). A rule can also be triggered to evaluate your account’s configuration periodically. For example, the AWS Config service runs evaluations for the rule at a chosen frequency (such as every 24 hours). For more information on AWS Config, refer to Chapter 9.
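Enabling a prebuilt rule can be sketched as follows (Boto3; the rule name is illustrative, and the sketch uses the AWS-managed rule that flags security groups allowing unrestricted inbound SSH):

    import boto3

    config = boto3.client('config')

    # Evaluate every security group whenever its configuration changes.
    config.put_config_rule(ConfigRule={
        'ConfigRuleName': 'no-unrestricted-ssh',
        'Source': {'Owner': 'AWS',
                   'SourceIdentifier': 'INCOMING_SSH_DISABLED'},  # managed rule
        'Scope': {'ComplianceResourceTypes': ['AWS::EC2::SecurityGroup']}})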

Amazon Inspector

Amazon Inspector is an automated security assessment service that helps improve the security and compliance of applications deployed on AWS. Amazon Inspector automatically assesses applications for vulnerabilities or deviations from best practices. After performing an assessment, Amazon Inspector produces a detailed list of security findings prioritized by level of severity.

Amazon Inspector includes a knowledge base of hundreds of rules mapped to common security best practices and vulnerability definitions. Examples of built-in rules include checking for remote root login being enabled or vulnerable software versions being installed. These rules are regularly updated by AWS security researchers. More information about Amazon Inspector can be found in Chapters 4 and 9.

AWS Certificate Manager

AWS Certificate Manager is a service that lets you provision, manage, and deploy SSL/TLS certificates for use with AWS Cloud services. SSL/TLS certificates are used to secure network communications and establish the identity of websites over the Internet. With AWS Certificate Manager, you can request a certificate and deploy it on AWS resources, such as Elastic Load Balancing load balancers or Amazon CloudFront distributions. AWS Certificate Manager then handles the certificate renewals.

AWS Web Application Firewall (AWS WAF)

AWS Web Application Firewall (AWS WAF) is a web application firewall that helps protect web applications from attacks by allowing you to configure rules that allow, block, or monitor (count) web requests based on conditions that you define. These conditions include IP addresses, HTTP headers, HTTP body, Uniform Resource Identifier (URI) strings, SQL injection, and cross-site scripting.

As the underlying service receives requests for your websites, it forwards those requests to AWS WAF for inspection against your rules. Once a request meets a condition defined in your rules, AWS WAF instructs the underlying service either to block or allow the request based on the action you define.

AWS WAF is tightly integrated with Amazon CloudFront and the Application Load Balancer, services that AWS customers commonly use to deliver content for their websites and applications. When you use AWS WAF on Amazon CloudFront, your rules run in all AWS edge locations around the world close to your end users. This means that security doesn’t come at the expense of performance. Blocked requests are stopped before they reach your web servers. When you use AWS WAF on Application Load Balancer, your rules run in-region and can be used to protect Internet-facing and internal load balancers.

AWS Trusted Advisor

The AWS Trusted Advisor customer support service monitors not only cloud performance and resiliency, but also cloud security. AWS Trusted Advisor inspects your AWS environment and makes recommendations when opportunities may exist to save money, improve system performance, or close security gaps. It provides alerts on several of the most common security misconfigurations that can occur, including leaving certain ports open that make you vulnerable to hacking and unauthorized access, neglecting to create IAM accounts for your internal users, allowing public access to Amazon S3 buckets, not turning on user activity logging (AWS CloudTrail), or not using MFA on your root AWS account. More information on AWS Trusted Advisor checks is located in Chapter 9.

AWS Cloud Service-Specific Security

Not only is security built into every layer of the AWS infrastructure, but it’s also built into each of the services available on that infrastructure. AWS Cloud services are architected to work efficiently and securely with all AWS networks and platforms. Each service provides additional security features to enable you to protect sensitive data and applications.

Compute Services

AWS provides a variety of cloud-based computing services that include a wide selection of compute instances that can scale up and down automatically to meet the needs of your application or enterprise.

Amazon Elastic Compute Cloud (Amazon EC2) Security

Amazon EC2 is a key component in Amazon’s Infrastructure as a Service (IaaS), providing resizable computing capacity using server instances in AWS datacenters. Amazon EC2 is designed to make web-scale computing easier by enabling you to obtain and configure capacity with minimal friction. You create and launch instances, which are collections of platform hardware and software.

Multiple levels of security Security within Amazon EC2 is provided on multiple levels: the operating system of the host platform, the virtual instance operating system or guest operating system, a firewall, and signed API calls. Each of these items builds on the capabilities of the others. The goals are to prevent data contained in Amazon EC2 from being intercepted by unauthorized systems or users and to make Amazon EC2 instances themselves as secure as possible without sacrificing the flexibility in configuration that customers demand.

The Hypervisor Amazon EC2 currently uses a highly customized version of the Xen Hypervisor, taking advantage of paravirtualization (in the case of Linux guests). Because paravirtualized guests rely on the Hypervisor to provide support for operations that normally require privileged access, the guest operating system has no elevated access to the CPU. The CPU provides four separate privilege modes, called rings: 0–3. Ring 0 is the most privileged and 3 the least. The host operating system executes in Ring 0. Rather than executing in Ring 0 as most operating systems do, the guest operating system runs in lesser-privileged Ring 1, and applications in the least-privileged Ring 3. This explicit virtualization of the physical resources leads to a clear separation between guest and Hypervisor, resulting in additional security separation between the two.

Instance isolation Different instances running on the same physical machine are isolated from each other via the Xen Hypervisor. Amazon is active in the Xen community, which provides AWS with awareness of the latest developments. In addition, the AWS firewall resides within the Hypervisor layer, between the physical network interface and the instance’s virtual interface. All packets must pass through this layer; thus an instance’s neighbors have no more access to that instance than any other host on the Internet and can be treated as if they are on separate physical hosts. The physical RAM is separated using similar mechanisms. Customer instances have no access to raw disk devices, but instead are presented with virtualized disks. The AWS proprietary disk virtualization layer automatically resets every block of storage used by the customer so that one customer’s data is never unintentionally exposed to another customer. In addition, memory allocated to guests is scrubbed (set to zero) by the Hypervisor when it is unallocated to a guest. The memory is not returned to the pool of free memory available for new allocations until the memory scrubbing is complete. Figure 3.3 depicts instance isolation within Amazon EC2.

Diagram shows the security layers of Amazon EC2, starting at the physical interface and passing through the firewall, customer security groups, virtual interfaces, and the Hypervisor to reach multiple customers.

FIGURE 3.3 Amazon EC2 multiple layers of security

Host operating system Administrators with a business need to access the management plane are required to use MFA to gain access to purpose-built administration hosts. These administrative hosts are systems that are specifically designed, built, configured, and hardened to protect the management plane of the cloud. All such access is logged and audited. When an employee no longer has a business need to access the management plane, the privileges and access to these hosts and relevant systems can be revoked.

Guest operating system Virtual instances are completely controlled by you, the customer. You have full root access or administrative control over accounts, services, and applications. AWS does not have any access rights to your instances or the guest operating system. AWS recommends a base set of security best practices that includes disabling password-only access to your guests and using some form of MFA to gain access to your instances (or at a minimum, certificate-based SSH Version 2 access). Additionally, you should employ a privilege escalation mechanism with logging on a per-user basis. For example, if the guest operating system is Linux, after hardening your instance, you should use certificate-based SSH Version 2 to access the virtual instance, disable remote root login, use command-line logging, and use sudo for privilege escalation. You should generate your own key pairs in order to guarantee that they are unique and not shared with other customers or with AWS. AWS also supports the use of the SSH network protocol to enable you to log in securely to your UNIX/Linux Amazon EC2 instances. Authentication for SSH used with AWS is via a public/private key pair to reduce the risk of unauthorized access to your instance. You can also connect remotely to your Windows instances using Remote Desktop Protocol (RDP) by using an RDP certificate generated for your instance. You also control the updating and patching of your guest operating system, including security updates. Amazon-provided Windows- and Linux-based AMIs are updated regularly with the latest patches, so if you do not need to preserve data or customizations on your running AMI instances, you can simply relaunch new instances with the latest updated AMI. In addition, updates are provided for the Amazon Linux AMI via the Amazon Linux yum repositories.
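
As a minimal sketch of the key-pair recommendation above, using the AWS SDK for Python (boto3), you can generate the pair locally (for example, with ssh-keygen) and import only the public half, so the private key never leaves your machine. The key name and file path are illustrative placeholders.

import boto3

ec2 = boto3.client("ec2")

# Import the public half of a locally generated key pair; AWS never
# sees the private key, which stays on your workstation.
with open("my-admin-key.pub", "rb") as f:
    ec2.import_key_pair(KeyName="my-admin-key", PublicKeyMaterial=f.read())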

Firewall Amazon EC2 provides a mandatory inbound firewall that is configured in a default deny-all mode; Amazon EC2 customers must explicitly open the ports needed to allow inbound traffic. The traffic may be restricted by protocol, by service port, and by source IP address (individual IP or Classless Inter-Domain Routing [CIDR] block).

The firewall can be configured in groups, permitting different classes of instances to have different rules. Consider, for example, the case of a traditional three-tiered web application. The group for the web servers would have port 80 (HTTP) and/or port 443 (HTTPS) open to the Internet. The group for the application servers would have port 8000 (application-specific) accessible only to the web server group. The group for the database servers would have port 3306 (MySQL) open only to the application server group. All three groups would permit administrative access on port 22 (SSH), but only from the customer’s corporate network. Highly secure applications can be deployed using this approach, which is depicted in Figure 3.4.

Diagram shows an Amazon EC2 security group firewall controlling access to the tiers: open web ports to the web tier, SSH access for engineering staff to the application tier, access for authorized third parties to the database tier, and all other ports blocked by default.

FIGURE 3.4 Amazon EC2 security group firewall
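
The three-tier layout just described maps directly onto security group API calls. The following boto3 sketch shows one way to wire the tiers together; the group names, application port, and corporate CIDR block are illustrative assumptions (in an Amazon VPC you would also pass a VpcId when creating each group).

import boto3

ec2 = boto3.client("ec2")

# One security group per tier (illustrative names).
web_sg = ec2.create_security_group(
    GroupName="web-tier", Description="Web servers")["GroupId"]
app_sg = ec2.create_security_group(
    GroupName="app-tier", Description="Application servers")["GroupId"]
db_sg = ec2.create_security_group(
    GroupName="db-tier", Description="Database servers")["GroupId"]

# Web tier: HTTP/HTTPS open to the Internet.
ec2.authorize_security_group_ingress(
    GroupId=web_sg,
    IpPermissions=[
        {"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
        {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
    ])

# Application tier: port 8000 reachable only from the web tier group.
ec2.authorize_security_group_ingress(
    GroupId=app_sg,
    IpPermissions=[{"IpProtocol": "tcp", "FromPort": 8000, "ToPort": 8000,
                    "UserIdGroupPairs": [{"GroupId": web_sg}]}])

# Database tier: MySQL reachable only from the application tier group.
ec2.authorize_security_group_ingress(
    GroupId=db_sg,
    IpPermissions=[{"IpProtocol": "tcp", "FromPort": 3306, "ToPort": 3306,
                    "UserIdGroupPairs": [{"GroupId": app_sg}]}])

# All tiers: SSH only from the assumed corporate network range.
for sg in (web_sg, app_sg, db_sg):
    ec2.authorize_security_group_ingress(
        GroupId=sg,
        IpPermissions=[{"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
                        "IpRanges": [{"CidrIp": "203.0.113.0/24"}]}])

Referencing the upstream group in UserIdGroupPairs, rather than an IP range, is what keeps each tier reachable only from the tier in front of it.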

The level of security afforded by the firewall is a function of which ports you open and for what duration and purpose. Well-informed traffic management and security design are still required on a per-instance basis. AWS further encourages you to apply additional per-instance filters with host-based firewalls, such as iptables or the Windows Firewall, and with VPNs. These can restrict both inbound and outbound traffic.


API access API calls to launch and terminate instances, change firewall parameters, and perform other functions are all signed by your Amazon secret access key, which could be either the AWS account’s secret access key or the secret access key of a user created with IAM. Without access to your secret access key, Amazon EC2 API calls cannot be made on your behalf. API calls can also be encrypted with SSL to maintain confidentiality. AWS recommends always using SSL-protected API endpoints.

Amazon Elastic Block Store (Amazon EBS) security Amazon Elastic Block Store (Amazon EBS) allows you to create storage volumes from 1 GB to 16 TB that can be mounted as devices by Amazon EC2 instances. Storage volumes behave like raw, unformatted block devices with user-supplied device names and a block device interface. You can create a file system on top of Amazon EBS volumes or use them in any other way you would use a block device (like a hard drive). Amazon EBS volume access is restricted to the AWS account that created the volume and to the users under the AWS account created with IAM (if the user has been granted access to the Amazon EBS operations). All other AWS accounts and users are denied the permission to view or access the volume.

Data stored in Amazon EBS volumes is redundantly stored in multiple physical locations as part of normal operation of those services, at no additional charge. However, Amazon EBS replication is stored within the same Availability Zone, not across multiple zones; therefore, it is highly recommended that you conduct regular snapshots and save them to Amazon S3 for long-term data durability. For customers who have architected complex transactional databases using Amazon EBS, it is recommended that backups to Amazon S3 be performed through the database management system so that distributed transactions and logs can be checkpointed. AWS does not automatically perform backups of data that are maintained on virtual disks attached to running instances on Amazon EC2.

You can make Amazon EBS volume snapshots publicly available to other AWS accounts to use as the basis for creating duplicate volumes. Sharing Amazon EBS volume snapshots does not provide other AWS accounts with the permission to alter or delete the original snapshot, as that right is explicitly reserved for the AWS account that created the volume. An Amazon EBS snapshot is a block-level view of an entire Amazon EBS volume. Note that data that is not visible through the file system on the volume, such as files that have been deleted, may be present in the Amazon EBS snapshot. If you want to create shared snapshots, you should do so carefully. If a volume has held sensitive data or has had files deleted from it, you should create a new Amazon EBS volume to share. The data to be contained in the shared snapshot should be copied to the new volume and the snapshot created from the new volume.
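
As a sketch of that workflow with boto3, the following snapshots a freshly prepared volume and shares it with one specific account rather than making it public; the volume and account IDs are placeholders.

import boto3

ec2 = boto3.client("ec2")

# Snapshot the new, sanitized volume (placeholder volume ID).
snap = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",
    Description="Sanitized volume prepared for sharing")
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

# Grant createVolumePermission to one trusted account (placeholder ID)
# instead of adding the "all" group, which would make the snapshot public.
ec2.modify_snapshot_attribute(
    SnapshotId=snap["SnapshotId"],
    Attribute="createVolumePermission",
    OperationType="add",
    UserIds=["111122223333"])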

Amazon EBS volumes are presented to you as raw unformatted block devices, which have been wiped prior to being made available for use. Wiping occurs immediately before reuse so that you can be assured that the wipe process completed. If you have procedures requiring that all data be wiped via a specific method, you have the ability to do so on Amazon EBS. You should conduct a specialized wipe procedure prior to deleting the volume for compliance with your established requirements.

Encryption of sensitive data is generally a good security practice, and AWS provides the ability to encrypt Amazon EBS volumes and their snapshots with AES-256. The encryption occurs on the servers that host the Amazon EC2 instances, providing encryption of data as it moves between Amazon EC2 instances and Amazon EBS storage. In order to be able to do this efficiently and with low latency, the Amazon EBS encryption feature is only available on Amazon EC2’s more powerful instance types.
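
Requesting an encrypted volume is a single flag at creation time. A minimal boto3 sketch follows; the Availability Zone, size, and volume type are placeholder values.

import boto3

ec2 = boto3.client("ec2")

# Request an encrypted volume; snapshots taken from it, and volumes
# restored from those snapshots, are encrypted as well.
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",  # placeholder zone
    Size=100,                       # GiB
    VolumeType="gp2",
    Encrypted=True)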

Networking

AWS provides a range of networking services that enable you to create a logically isolated network that you define, establish a private network connection to the AWS Cloud, use a highly available and scalable Domain Name System (DNS) service, and deliver content to your end users with low latency at high data transfer speeds with a content delivery web service.

Elastic Load Balancing Security

Elastic Load Balancing is used to manage traffic on a fleet of Amazon EC2 instances, distributing traffic to instances across all Availability Zones within a region. Elastic Load Balancing has all of the advantages of an on-premises load balancer, plus several security benefits:

  • Takes over the encryption and decryption work from the Amazon EC2 instances and manages it centrally on the load balancer
  • Offers clients a single point of contact and can also serve as the first line of defense against attacks on the customer’s network
  • When used in an Amazon VPC, supports creation and management of security groups associated with Elastic Load Balancing to provide additional networking and security options
  • Supports end-to-end traffic encryption using TLS (previously SSL) on those networks that use HTTPS connections. When TLS is used, the TLS server certificate used to terminate client connections can be managed centrally on the load balancer, instead of on every individual instance.

HTTPS/TLS uses a long-term secret key to generate a short-term session key to be used between the server and the browser to create the encrypted message. Elastic Load Balancing configures your load balancer with a predefined cipher set that is used for TLS negotiation when a connection is established between a client and your load balancer. The predefined cipher set provides compatibility with a broad range of clients and uses strong cryptographic algorithms. However, some customers may have requirements for allowing only specific ciphers and protocols (for example, PCI DSS, Sarbanes-Oxley Act [SOX]) from clients to ensure that standards are met. In these cases, Elastic Load Balancing provides options for selecting different configurations for TLS protocols and ciphers. You can choose to enable or disable the ciphers depending on your specific requirements.

To help ensure the use of newer and stronger cipher suites when establishing a secure connection, you can configure the load balancer to have the final say in the cipher suite selection during the client-server negotiation. When the server order preference option is selected, the load balancer will select a cipher suite based on the server’s prioritization of cipher suites rather than the client’s. This gives you more control over the level of security that clients use to connect to your load balancer.
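
For a Classic Load Balancer, the protocol, cipher, and server-order choices described above can be expressed as an SSL negotiation policy. The boto3 sketch below defines a custom policy that enables only TLS 1.2 with one ECDHE cipher suite and server order preference, then attaches it to the HTTPS listener; the load balancer name and the specific cipher are illustrative assumptions.

import boto3

elb = boto3.client("elb")

# Custom SSL negotiation policy: TLS 1.2 only, a single ECDHE suite,
# and server-side cipher order preference (illustrative selection).
elb.create_load_balancer_policy(
    LoadBalancerName="my-load-balancer",
    PolicyName="custom-tls-policy",
    PolicyTypeName="SSLNegotiationPolicyType",
    PolicyAttributes=[
        {"AttributeName": "Protocol-TLSv1.2", "AttributeValue": "true"},
        {"AttributeName": "ECDHE-RSA-AES128-GCM-SHA256",
         "AttributeValue": "true"},
        {"AttributeName": "Server-Defined-Cipher-Order",
         "AttributeValue": "true"},
    ])

# Apply the policy to the HTTPS listener on port 443.
elb.set_load_balancer_policies_of_listener(
    LoadBalancerName="my-load-balancer",
    LoadBalancerPort=443,
    PolicyNames=["custom-tls-policy"])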

For even greater communication privacy, Elastic Load Balancing allows the use of perfect forward secrecy, which uses session keys that are ephemeral and not stored anywhere. This prevents the decoding of captured data, even if the secret long-term key itself is compromised.

Elastic Load Balancing allows you to identify the originating IP address of a client connecting to your servers, whether you’re using HTTPS or TCP load balancing. Typically, client connection information, such as IP address and port, is lost when requests are proxied through a load balancer. This is because the load balancer sends requests to the server on behalf of the client, making your load balancer appear as though it is the requesting client. Having the originating client IP address is useful if you need more information about visitors to your applications in order to gather connection statistics, analyze traffic logs, or manage whitelists of IP addresses.

Elastic Load Balancing access logs contain information about each HTTP and TCP request processed by your load balancer. This includes the IP address and port of the requesting client, the back-end IP address of the instance that processed the request, the size of the request and response, and the actual request line from the client (for example, GET http://www.example.com:80/ HTTP/1.1). All requests sent to the load balancer are logged, including requests that never make it to back-end instances (more on Elastic Load Balancing can be found in Chapter 5).

Amazon Virtual Private Cloud (Amazon VPC) Security

Normally, each Amazon EC2 instance you launch is randomly assigned a public IP address in the Amazon EC2 address space. Amazon VPC enables you to create an isolated portion of the AWS Cloud and launch Amazon EC2 instances that have private (RFC 1918) addresses in the range of your choice (for example, 10.0.0.0/16). You can define subnets in your Amazon VPC, grouping similar kinds of instances based on IP address range, and then set up routing and security to control the flow of traffic in and out of the instances and subnets.

Security features in Amazon VPC include security groups, network ACLs, routing tables, and external gateways. Each of these items is complementary to providing a secure, isolated network that can be extended through selective enabling of direct Internet access or private connectivity to another network. Amazon EC2 instances running in an Amazon VPC inherit all of the benefits described next that are related to the guest operating system and protection against packet sniffing. Note, however, that you must create security groups specifically for your Amazon VPC; any Amazon EC2 security groups that you have created will not work inside your Amazon VPC. In addition, Amazon VPC security groups have additional capabilities that Amazon EC2 security groups do not have, such as being able to change the security group after the instance is launched and being able to specify any protocol with a standard protocol number (as opposed to just TCP, User Datagram Protocol [UDP], or Internet Control Message Protocol [ICMP]).

Each Amazon VPC is a distinct, isolated network in the cloud; network traffic in each Amazon VPC is isolated from all other Amazon VPCs. At creation time, you select an IP address range for each Amazon VPC. You may create and attach an Internet gateway, virtual private gateway, or both to establish external connectivity, subject to the following controls.
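
A brief boto3 sketch of those building blocks follows, using the example CIDR range given above; the subnet ranges are illustrative, and the Internet gateway is attached only because this example opts in to external connectivity.

import boto3

ec2 = boto3.client("ec2")

# Isolated network with a private RFC 1918 range.
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

# Subnets group instances by IP address range.
public_subnet = ec2.create_subnet(
    VpcId=vpc_id, CidrBlock="10.0.0.0/24")["Subnet"]["SubnetId"]
private_subnet = ec2.create_subnet(
    VpcId=vpc_id, CidrBlock="10.0.1.0/24")["Subnet"]["SubnetId"]

# External connectivity is opt-in: attach an Internet gateway only
# if direct Internet access is required.
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)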

API access Calls to create and delete Amazon VPCs; change routing, security group, and network ACL parameters; and perform other functions are all signed by your Amazon secret access key, which could be either the AWS account’s secret access key or the secret access key of a user created with IAM. Without access to your secret access key, Amazon VPC API calls cannot be made on your behalf. In addition, API calls can be encrypted with SSL to maintain confidentiality. AWS recommends always using SSL-protected API endpoints. IAM also enables a customer to further control what APIs a newly created user has permissions to call.

Subnets and route tables You create one or more subnets in each Amazon VPC. Each instance launched in the Amazon VPC is connected to one subnet. Traditional Layer 2 security attacks, including MAC spoofing and Address Resolution Protocol (ARP) spoofing, are blocked. Each subnet in an Amazon VPC is associated with a routing table, and all network traffic leaving the subnet is processed by the routing table to determine the destination.

Firewall (security groups) Like Amazon EC2, Amazon VPC supports a complete firewall solution, enabling filtering on both ingress and egress traffic from an instance. The default group enables inbound communication from other members of the same group and outbound communication to any destination. Traffic can be restricted by any IP protocol, by service port, and by source/destination IP address (individual IP or CIDR block). The firewall isn’t controlled through the guest operating system; rather, it can be modified only through the invocation of Amazon VPC APIs. AWS supports the ability to grant granular access to different administrative functions on the instances and the firewall, therefore enabling you to implement additional security through separation of duties. The level of security afforded by the firewall is a function of which ports you open and for what duration and purpose. Well-informed traffic management and security design are still required on a per-instance basis. AWS further encourages you to apply additional per-instance filters with host-based firewalls, such as iptables or the Windows Firewall. Figure 3.5 illustrates an Amazon VPC with two types of subnets—public and private—and two network paths with two different networks—a customer datacenter and the Internet.

Diagram shows an AWS region with a router between zone A (a virtual private gateway connecting a customer datacenter and regional office) and zone B (an Internet gateway connecting the Internet and a public subnet).

FIGURE 3.5 Amazon VPC network architecture

Network ACLs To add a further layer of security within Amazon VPC, you can configure network ACLs. These are stateless traffic filters that apply to all traffic inbound or outbound from a subnet within Amazon VPC. These ACLs can contain ordered rules to allow or deny traffic based on IP protocol, by service port, and source/destination IP address.

Like security groups, network ACLs are managed through Amazon VPC APIs, adding an additional layer of protection and enabling additional security through separation of duties. Figure 3.6 depicts how the security controls discussed thus far interrelate to enable flexible network topologies while providing complete control over network traffic flows.

Diagram shows an Amazon VPC in which an Internet gateway and a virtual private gateway connect through a router to the route table, network ACL, security group, and instance of subnets 1 and 2, respectively.

FIGURE 3.6 Flexible network topologies
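
Because network ACLs are stateless, return traffic must be allowed explicitly. A minimal boto3 sketch follows; the VPC ID and rule numbers are placeholders.

import boto3

ec2 = boto3.client("ec2")

acl_id = ec2.create_network_acl(
    VpcId="vpc-0123456789abcdef0")["NetworkAcl"]["NetworkAclId"]

# Rule 100 inbound: allow HTTPS from anywhere.
ec2.create_network_acl_entry(
    NetworkAclId=acl_id, RuleNumber=100, Protocol="6",  # 6 = TCP
    RuleAction="allow", Egress=False,
    CidrBlock="0.0.0.0/0", PortRange={"From": 443, "To": 443})

# Rule 100 outbound: allow ephemeral ports for the return traffic,
# since the ACL does not track connection state.
ec2.create_network_acl_entry(
    NetworkAclId=acl_id, RuleNumber=100, Protocol="6",
    RuleAction="allow", Egress=True,
    CidrBlock="0.0.0.0/0", PortRange={"From": 1024, "To": 65535})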

Virtual private gateways A virtual private gateway enables private connectivity between the Amazon VPC and another network. Network traffic within each virtual private gateway is isolated from network traffic within all other virtual private gateways. You can establish VPN connections to the virtual private gateway from gateway devices at your premises. Each connection is secured by a pre-shared key in conjunction with the IP address of the customer gateway device.

Internet gateways An Internet gateway may be attached to an Amazon VPC to enable direct connectivity to Amazon S3, other AWS Cloud services, and the Internet. Each instance desiring this access must either have an Elastic IP associated with it or route traffic through a Network Address Translation (NAT) instance. Additionally, network routes are configured to direct traffic to the Internet gateway. AWS provides reference NAT AMIs that you can extend to perform network logging, deep packet inspection, application layer filtering, or other security controls.

This access can only be modified through the invocation of Amazon VPC APIs. AWS supports the ability to grant granular access to different administrative functions on the instances and the Internet gateway, enabling you to implement additional security through separation of duties.

Dedicated instances Within an Amazon VPC, you can launch Amazon EC2 instances that are physically isolated at the host hardware level (that is, they will run on single-tenant hardware). An Amazon VPC can be created with “dedicated” tenancy so that all instances launched into the Amazon VPC will use this feature. Alternatively, an Amazon VPC may be created with “default” tenancy, but you can specify dedicated tenancy for particular instances launched into it. More information on networking on AWS can be found in Chapter 5.

Dedicated hosts An Amazon EC2 Dedicated Host is a physical server with Amazon EC2 instance capacity fully dedicated to your use. Dedicated Hosts allow you to use your existing per-socket, per-core, or per-virtual machine software licenses, including Windows Server, Microsoft SQL Server, SUSE Linux Enterprise Server, and so on. More information on Dedicated Hosts can be found in Chapter 4.

Amazon CloudFront Security

Amazon CloudFront gives customers an easy way to distribute content to end users with low latency and high data transfer speeds. It delivers dynamic, static, and streaming content using a global network of edge locations. Requests for customers’ objects are automatically routed to the nearest edge location, so content is delivered with the best possible performance. Amazon CloudFront is optimized to work with other AWS Cloud services like Amazon S3, Amazon EC2, Elastic Load Balancing, and Amazon Route 53. It also works seamlessly with any non-AWS origin server that stores the original, definitive versions of your files.

Amazon CloudFront requires that every request made to its control API be authenticated so that only authorized users can create, modify, or delete their own Amazon CloudFront distributions. Requests are signed with an HMAC-SHA-1 signature calculated from the request and the user’s private key. Additionally, the Amazon CloudFront control API is only accessible via SSL-enabled endpoints.

There is no guarantee of durability of data held in Amazon CloudFront edge locations. The service may sometimes remove objects from edge locations if those objects are not requested frequently. Durability is provided by Amazon S3, which works as the origin server for Amazon CloudFront by holding the original, definitive copies of objects delivered by Amazon CloudFront.

If you want control over who can download content from Amazon CloudFront, you can enable the service’s private content feature. This feature has two components. The first controls how content is delivered from the Amazon CloudFront edge location to viewers on the Internet. The second controls how the Amazon CloudFront edge locations access objects in Amazon S3. Amazon CloudFront also supports geo-restriction, which restricts access to your content based on the geographic location of your viewers.

To control access to the original copies of your objects in Amazon S3, Amazon CloudFront allows you to create one or more origin access identities and associate these with your distributions. When an origin access identity is associated with an Amazon CloudFront distribution, the distribution will use that identity to retrieve objects from Amazon S3. You can then use Amazon S3’s ACL feature, which limits access to that origin access identity so that the original copy of the object is not publicly readable.

To control who can download objects from Amazon CloudFront edge locations, the service uses a signed-URL verification system. To use this system, you first create a public-private key pair and upload the public key to your account via the AWS Management Console. You then configure your Amazon CloudFront distribution to indicate which accounts you would authorize to sign requests. You can indicate up to five AWS accounts that you trust to sign requests. As you receive requests, you will create policy documents indicating the conditions under which you want Amazon CloudFront to serve your content. These policy documents can specify the name of the object that is requested, the date and time of the request, and the source IP (or CIDR range) of the client making the request. You then calculate the SHA-1 hash of your policy document and sign this using your private key. Finally, you include both the encoded policy document and the signature as query string parameters when you reference your objects. When Amazon CloudFront receives a request, it will decode the signature using your public key. Amazon CloudFront will only serve requests that have a valid policy document and matching signature.
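
The signing flow just described is what the SDKs automate. The sketch below uses botocore’s CloudFrontSigner together with the third-party rsa package to produce a canned-policy signed URL; the key pair ID, private key file, distribution domain, and expiry date are placeholder assumptions.

import datetime

from botocore.signers import CloudFrontSigner
import rsa  # third-party package used here for RSA signing

def rsa_signer(message):
    # Sign with the private half of a trusted key pair (placeholder file).
    with open("cloudfront-private-key.pem", "rb") as key_file:
        private_key = rsa.PrivateKey.load_pkcs1(key_file.read())
    return rsa.sign(message, private_key, "SHA-1")

signer = CloudFrontSigner("APKAEXAMPLEKEYPAIRID", rsa_signer)

# Canned policy: the URL stays valid until the expiry date below.
signed_url = signer.generate_presigned_url(
    "https://d111111abcdef8.cloudfront.net/private/video.mp4",
    date_less_than=datetime.datetime(2018, 1, 1))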

Note that private content is an optional feature that must be enabled when you set up your Amazon CloudFront distribution. Content delivered without this feature enabled will be publicly readable.

Amazon CloudFront provides the option to transfer content over an encrypted connection (HTTPS). By default, Amazon CloudFront will accept requests over both HTTP and HTTPS protocols. You can also configure Amazon CloudFront to require HTTPS for all requests or have Amazon CloudFront redirect HTTP requests to HTTPS. You can even configure Amazon CloudFront distributions to allow HTTP for some objects but require HTTPS for other objects. More information on Amazon CloudFront can be found in Chapters 5 and 6.

Storage

AWS provides low-cost data storage with high durability and availability. AWS offers storage choices for backup, archiving, disaster recovery, and block and object storage.

Amazon Simple Storage Service (Amazon S3) Security

Amazon S3 allows you to upload and retrieve data at any time from anywhere on the web. Amazon S3 stores data as objects in buckets. An object can be any kind of file: a text file, a photo, a video, and more. When you add a file to Amazon S3, you have the option of including metadata with the file and setting permissions to control access to the file. For each bucket, you can control access to the bucket (who can create, delete, and list objects in the bucket), view access logs for the bucket and its objects, and choose the geographical region where Amazon S3 will store the bucket and its contents.

Data Access

Access to data stored in Amazon S3 is restricted by default; only bucket and object owners have access to the Amazon S3 resources that they create (note that a bucket/object owner is the AWS account owner, not the user who created the bucket/object). There are multiple ways to control access to buckets and objects.

IAM policies IAM enables organizations with many employees to create and manage multiple users under a single AWS account. IAM policies are attached to the users, enabling centralized control of permissions for users under your AWS account to access buckets or objects. With IAM policies, you can only grant users in your own AWS account permission to access your Amazon S3 resources.

ACLs Within Amazon S3, you can use ACLs to give read or write access on buckets or objects to groups of users. With ACLs, you can only grant other AWS accounts (not specific users) access to your Amazon S3 resources.

Bucket policies Bucket policies in Amazon S3 can be used to add or deny permissions across some or all of the objects in a single bucket. Policies can be attached to users, groups, or Amazon S3 buckets, enabling centralized management of permissions. With bucket policies, you can grant users in your AWS account or other AWS accounts access to your Amazon S3 resources.

Query string authentication You can use a query string to express a request entirely in a URL. In this case, you use query parameters to provide request information, including the authentication information. Because the request signature is part of the URL, this type of URL is often referred to as a pre-signed URL. You can use pre-signed URLs to embed clickable links, which can be valid for up to seven days, in HTML.

You can further restrict access to specific resources based on certain conditions. For example, you can restrict access based on request time (date condition), whether the request was sent using SSL (Boolean conditions), a requester’s IP address (IP address condition), or the requester’s client application (string conditions). To identify these conditions, you use policy keys.

Amazon S3 also gives developers the option to use query string authentication, which allows them to share Amazon S3 objects through URLs that are valid for a predefined period of time. Query string authentication is useful for giving HTTP or browser access to resources that would normally require authentication. The signature in the query string secures the request.
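
Generating such a pre-signed URL with boto3 is a one-liner; the bucket and key names below are placeholders. The SDK signs the URL with the caller’s credentials, so the link carries only that user’s permissions.

import boto3

s3 = boto3.client("s3")

# The URL grants time-limited GET access without requiring AWS
# credentials from the person who follows it.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-example-bucket", "Key": "reports/q1.pdf"},
    ExpiresIn=3600)  # seconds, up to the seven-day maximum noted above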

Data Transfer

For maximum security, you can securely upload/download data to Amazon S3 via the SSL-encrypted endpoints. The encrypted endpoints are accessible from both the Internet and from within Amazon EC2, so that data is transferred securely both within AWS and to and from sources outside of AWS.

Data Storage

Amazon S3 provides multiple options for protecting data at rest. If you prefer to manage your own encryption, you can use a client encryption library like the Amazon S3 Encryption Client to encrypt data before uploading to Amazon S3. Alternatively, you can use Amazon S3 Server Side Encryption (SSE) if you prefer to have Amazon S3 manage the encryption process for you. Data is encrypted with a key generated by AWS or with a key that you supply, depending on your requirements. With Amazon S3 SSE, you can encrypt data on upload simply by adding an additional request header when writing the object. Decryption happens automatically when data is retrieved. Note that metadata, which you can include with your object, is not encrypted.


Amazon S3 SSE uses one of the strongest block ciphers available: AES-256. With Amazon S3 SSE, every protected object is encrypted with a unique encryption key. This object key itself is then encrypted with a regularly rotated master key. Amazon S3 SSE provides additional security by storing the encrypted data and encryption keys in different hosts. Amazon S3 SSE also makes it possible for you to enforce encryption requirements. For example, you can create and apply bucket policies that require that only encrypted data can be uploaded to your buckets.
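
The boto3 sketch below shows both halves of that: uploading an object with SSE requested via the extra request header (which boto3 sets for you), and a bucket policy that rejects unencrypted uploads. The bucket name and object key are placeholder assumptions.

import json

import boto3

s3 = boto3.client("s3")

# Request server-side encryption for this object; boto3 adds the
# x-amz-server-side-encryption header on your behalf.
s3.put_object(
    Bucket="my-example-bucket",
    Key="data/records.csv",
    Body=b"id,value\n1,42\n",
    ServerSideEncryption="AES256")

# Enforce encryption: deny any PutObject that does not request SSE.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUnencryptedUploads",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::my-example-bucket/*",
        "Condition": {
            "StringNotEquals": {
                "s3:x-amz-server-side-encryption": "AES256"}},
    }],
}
s3.put_bucket_policy(Bucket="my-example-bucket", Policy=json.dumps(policy))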

When an object is deleted from Amazon S3, removal of the mapping from the public name to the object starts immediately and is generally processed across the distributed system in several seconds. Once the mapping is removed, there is no remote access to the deleted object. The underlying storage area is then reclaimed for use by the system.

Amazon S3 Standard is designed to provide 99.999999999 percent durability of objects over a given year. This durability level corresponds to an average annual expected loss of 0.000000001 percent of objects. For example, if you store 10,000 objects with Amazon S3, you can on average expect to incur a loss of a single object once every 10,000,000 years. In addition, Amazon S3 is designed to sustain the concurrent loss of data in two facilities.

Access Logs

An Amazon S3 bucket can be configured to log access to the bucket and objects in it. The access log contains details about each access request, including request type, the requested resource, the requestor’s IP, and the time and date of the request. When logging is enabled for a bucket, log records are periodically aggregated into log files and delivered to the specified Amazon S3 bucket.
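
A boto3 sketch of enabling that logging follows; the source and target bucket names and the prefix are placeholders, and the target bucket must already grant Amazon S3’s log delivery group permission to write to it.

import boto3

s3 = boto3.client("s3")

# Deliver access logs for one bucket into another under a prefix.
s3.put_bucket_logging(
    Bucket="my-example-bucket",
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "my-log-bucket",
            "TargetPrefix": "s3-access/",
        }
    })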

Cross-Origin Resource Sharing (CORS)

AWS customers who use Amazon S3 to host static web pages or store objects used by other web pages can load content securely by configuring an Amazon S3 bucket to explicitly enable cross-origin requests. Modern browsers use the same-origin policy to block JavaScript or HTML5 from allowing requests to load content from another site or domain as a way to help ensure that malicious content is not loaded from a less reputable source (such as during cross-site scripting attacks). With the Cross-Origin Resource Sharing (CORS) policy enabled, assets such as web fonts and images stored in an Amazon S3 bucket can be safely referenced by external web pages, style sheets, and HTML5 applications. For more information on Amazon S3, refer to Chapter 6.
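
As a sketch of that configuration in boto3, the following allows cross-origin GET requests from one assumed origin; the bucket name and origin are illustrative.

import boto3

s3 = boto3.client("s3")

# Allow pages served from the assumed origin to fetch assets
# (fonts, images) stored in this bucket.
s3.put_bucket_cors(
    Bucket="my-example-bucket",
    CORSConfiguration={
        "CORSRules": [{
            "AllowedOrigins": ["https://www.example.com"],
            "AllowedMethods": ["GET"],
            "AllowedHeaders": ["*"],
            "MaxAgeSeconds": 3000,
        }]
    })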

Amazon Glacier Security

Like Amazon S3, the Amazon Glacier service provides low-cost, secure, and durable storage. Where Amazon S3 is designed for rapid retrieval, Amazon Glacier is meant to be used as an archival service for data that is not accessed often and for which retrieval times of several hours are suitable.

Amazon Glacier stores files as archives in vaults. Archives can consist of any data such as a photo, video, or document, and can contain one or several files. You can store an unlimited number of archives in a single vault and can create up to 1,000 vaults per region. Each archive can contain up to 40 TB of data.

Data Transfer

For maximum security, you can securely upload/download data to Amazon Glacier via the SSL-encrypted endpoints. The encrypted endpoints are accessible from both the Internet and from within Amazon EC2, so that data is transferred securely both within AWS and to and from sources outside of AWS.

Data Retrieval

Retrieving archives from Amazon Glacier requires the initiation of a retrieval job, which is generally completed in three to five hours. You can then access the data via HTTP GET requests. The data will remain available to you for 24 hours. You can retrieve an entire archive or several files from an archive. If you want to retrieve only a subset of an archive, you can use one retrieval request to specify the range of the archive that contains the files in which you are interested, or you can initiate multiple retrieval requests, each with a range for one or more files.

You can also limit the number of vault inventory items retrieved by filtering on an archive creation date range or by setting a maximum items limit. Whichever method you choose, when you retrieve portions of your archive, you can use the supplied checksum to help ensure the integrity of the files, provided that the range that is retrieved is aligned with the tree hash of the overall archive.
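
A boto3 sketch of the retrieval flow follows; the vault name and archive ID are placeholders, and per the note above, the byte range should be tree-hash aligned if you want to use the returned checksum.

import boto3

glacier = boto3.client("glacier")

# Start an archive-retrieval job for the first mebibyte of the archive.
job = glacier.initiate_job(
    accountId="-",  # "-" means the account that owns the credentials
    vaultName="my-example-vault",
    jobParameters={
        "Type": "archive-retrieval",
        "ArchiveId": "EXAMPLE-ARCHIVE-ID",
        "RetrievalByteRange": "0-1048575",  # megabyte-aligned range
    })

# Hours later (poll describe_job or subscribe an Amazon SNS topic),
# fetch the staged data; it remains available for 24 hours.
output = glacier.get_job_output(
    accountId="-", vaultName="my-example-vault", jobId=job["jobId"])
data = output["body"].read()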

Data Storage

Amazon Glacier automatically encrypts data using AES-256 and stores it durably in an immutable form. Amazon Glacier is designed to provide average annual durability of 99.999999999 percent for an archive. It stores each archive in multiple facilities and multiple devices. Unlike traditional systems, which can require laborious data verification and manual repair, Amazon Glacier performs regular, systematic data integrity checks and is built to be self-healing.

Data Access

Only your account can access your data in Amazon Glacier. To control access to your data in Amazon Glacier, you can use IAM to specify which users in your account have rights to operations on a given vault.

AWS Snowball

AWS Snowball is a data transport solution that accelerates moving terabytes to petabytes of data into and out of AWS using storage appliances designed to be secure for physical transport.

Data Transfer

When you’re using a standard AWS Snowball appliance to import data into Amazon S3, all data transferred to an AWS Snowball appliance has two layers of encryption:

  1. A layer of encryption is applied in the memory of your local workstation. This layer is applied whether you’re using the Amazon S3 Adapter for AWS Snowball or the AWS Snowball client. This encryption uses AES Galois/Counter Mode (GCM) 256-bit keys, and the keys are cycled for every 60 GB of data transferred.
  2. SSL encryption is a second layer of encryption for all data going onto or off of a standard AWS Snowball appliance.

AWS Snowball uses SSE to protect data at rest.

Data Retrieval

To use AWS Snowball export, simply sign in to the AWS Management Console, choose AWS Snowball, and create an export job. As with an import job, you specify the AWS Region and Amazon S3 buckets that you want to use. AWS Snowball encrypts all data with 256-bit encryption.

Data Storage

AWS Snowball encrypts all data with 256-bit encryption. You manage your encryption keys by using the AWS Key Management Service (AWS KMS). Your keys are never sent to or stored on the appliance.

Data Cleansing

When the data transfer job has been processed and verified, AWS performs a software erasure of the AWS Snowball appliance that follows the National Institute of Standards and Technology (NIST) guidelines for media sanitization.

More information on AWS Snowball is available in Chapter 6.

AWS Storage Gateway Security

The AWS Storage Gateway service connects your on-premises software appliance with cloud-based storage to provide seamless and secure integration between your IT environment and the AWS storage infrastructure. The service enables you to upload data securely to the scalable, reliable, and secure Amazon S3 storage service for cost-effective backup and rapid disaster recovery.

Data Transfer

Data is asynchronously transferred from your on-premises storage hardware to AWS over SSL.

Data Storage

The data is stored encrypted in Amazon S3 using AES-256, a symmetric key encryption standard using 256-bit encryption keys. AWS Storage Gateway only uploads data that has changed, minimizing the amount of data sent over the Internet.

Database

AWS provides a number of database solutions for developers and businesses, from managed relational and NoSQL database services to in-memory caching as a service and a petabyte-scale data warehouse service.

Amazon DynamoDB Security

Amazon DynamoDB is a managed NoSQL database service that provides fast and predictable performance with seamless scalability. Amazon DynamoDB enables you to offload the administrative burdens of operating and scaling distributed databases to AWS, so you don’t have to worry about hardware provisioning, setup and configuration, replication, software patching, or cluster scaling.

You can create a database table that can store and retrieve any amount of data and serve any level of request traffic. Amazon DynamoDB automatically spreads the data and traffic for the table over a sufficient number of servers to handle the request capacity you specified and the amount of data stored, while maintaining consistent, fast performance. All data items are stored on Solid State Drives (SSDs) and are automatically replicated across multiple Availability Zones in a region to provide built-in high availability and data durability.

You can set up automatic backups using a special template in AWS Data Pipeline that was created just for copying Amazon DynamoDB tables. You can choose full or incremental backups to a table in the same region or a different region. You can use the copy for disaster recovery in the event that an error in your code damages the original table or to federate Amazon DynamoDB data across regions to support a multi-region application.

To control who can use the Amazon DynamoDB resources and API, you set up permissions in IAM. In addition to controlling access at the resource level with IAM, you can also control access at the database level. You can create database-level permissions that allow or deny access to items (rows) and attributes (columns) based on the needs of your application. These database-level permissions are called fine-grained access controls, and you create them using an IAM policy that specifies under what circumstances a user or application can access an Amazon DynamoDB table. The IAM policy can restrict access to individual items in a table, access to the attributes in those items, or both at the same time.
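
A sketch of such a fine-grained policy follows, attached here to a role assumed through web identity federation. The table name, attribute names, account ID, and the ${www.amazon.com:user_id} identity variable are illustrative assumptions, not required values.

import json

import boto3

iam = boto3.client("iam")

# Allow reads only on items whose partition key matches the caller's
# federated identity, and only on two attributes (columns).
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:Query"],
        "Resource": "arn:aws:dynamodb:us-east-1:111122223333:table/GameScores",
        "Condition": {
            "ForAllValues:StringEquals": {
                "dynamodb:LeadingKeys": ["${www.amazon.com:user_id}"],
                "dynamodb:Attributes": ["UserId", "TopScore"],
            },
            "StringEqualsIfExists": {
                "dynamodb:Select": "SPECIFIC_ATTRIBUTES"},
        },
    }],
}

iam.put_role_policy(
    RoleName="GameWebIdentityRole",
    PolicyName="dynamodb-fine-grained-read",
    PolicyDocument=json.dumps(policy))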

In addition to requiring database and user permissions, each request to Amazon DynamoDB must contain a valid HMAC-SHA-256 signature or the request is rejected. The AWS SDKs automatically sign your requests; however, if you want to write your own HTTP POST requests, you must provide the signature in the header of your request to Amazon DynamoDB. To calculate the signature, you must request temporary security credentials from the AWS Security Token Service (AWS STS). Use the temporary security credentials to sign your requests to Amazon DynamoDB. Amazon DynamoDB is accessible via SSL-encrypted endpoints, and the encrypted endpoints are accessible from both the Internet and from within Amazon EC2.

Amazon Relational Database Service (Amazon RDS) Security

Amazon RDS allows you to create a relational database instance (DB instance) quickly and flexibly scale the associated compute resources and storage capacity to meet application demand. Amazon RDS manages the DB instance on your behalf by performing backups, handling failover, and maintaining the database software. As of the time of this writing, Amazon RDS is available for MySQL, Oracle, Microsoft SQL Server, MariaDB, Amazon Aurora, and PostgreSQL database engines.

Amazon RDS has multiple features that enhance reliability for critical production databases, including DB security groups, permissions, SSL connections, automated backups, DB snapshots, and multiple Availability Zone (Multi-AZ) deployments. DB instances can also be deployed in an Amazon VPC for additional network isolation.

Access control When you first create a DB instance in Amazon RDS, you will create a master user account, which is used only within the context of Amazon RDS to control access to your DB instance(s). The master user account is a native database user account that allows you to log on to your DB instance with all database privileges. You can specify the master user name and password you want associated with each DB instance when you create the DB instance. Once you have created your DB instance, you can connect to the database using the master user credentials. Subsequently, you can create additional user accounts so that you can restrict who can access your DB instance.

You can control Amazon RDS DB instance access via DB security groups, which are similar to Amazon EC2 security groups but are not interchangeable. DB security groups act like a firewall controlling network access to your DB instance. DB security groups default to a deny-all access mode, and customers must specifically authorize network ingress. There are two ways of doing this:

  • Authorizing a network IP range
  • Authorizing an existing Amazon EC2 security group

DB security groups only allow access to the database server port (all others are blocked) and can be updated without restarting the Amazon RDS DB instance.
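
Both authorization styles can be sketched with boto3 as follows; the group name, CIDR range, Amazon EC2 security group, and account ID are placeholders. Note that these DB security groups apply to DB instances running outside an Amazon VPC.

import boto3

rds = boto3.client("rds")

rds.create_db_security_group(
    DBSecurityGroupName="my-db-sg",
    DBSecurityGroupDescription="Access to the database tier")

# Option 1: authorize a network IP range.
rds.authorize_db_security_group_ingress(
    DBSecurityGroupName="my-db-sg",
    CIDRIP="203.0.113.0/24")

# Option 2: authorize an existing Amazon EC2 security group.
rds.authorize_db_security_group_ingress(
    DBSecurityGroupName="my-db-sg",
    EC2SecurityGroupName="app-tier",
    EC2SecurityGroupOwnerId="111122223333")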

Using IAM, you can further control access to your Amazon RDS DB instances. IAM enables you to control what Amazon RDS operations each individual IAM user has permission to call.

Network isolation For additional network access control, you can run your DB instances in an Amazon VPC. Amazon VPC enables you to isolate your DB instances by specifying the IP range that you want to use and connect to your existing IT infrastructure through an industry-standard encrypted IPsec VPN. Running Amazon RDS in an Amazon VPC enables you to have a DB instance within a private subnet. You can also set up a virtual private gateway that extends your corporate network into your Amazon VPC and allows access to the Amazon RDS DB instance in that Amazon VPC.

For Multi-AZ deployments, defining a subnet for all Availability Zones in a region will allow Amazon RDS to create a new standby in another Availability Zone should the need arise. You can create DB subnet groups, which are collections of subnets that you may want to designate for your Amazon RDS DB instances in an Amazon VPC. Each DB subnet group should have at least one subnet for every Availability Zone in a given region. In this case, when you create a DB instance in an Amazon VPC, you select a DB subnet group. Amazon RDS then uses that DB subnet group and your preferred Availability Zone to select a subnet and an IP address within that subnet. Amazon RDS creates and associates an elastic network interface to your DB instance with that IP address.

DB instances deployed within an Amazon VPC can be accessed from the Internet or from Amazon EC2 instances outside of the Amazon VPC via VPN or bastion hosts that you can launch in your public subnet. To use a bastion host, you need to set up a public subnet with an Amazon EC2 instance that acts as an SSH bastion. This public subnet must have an Internet gateway and routing rules that allow traffic to be directed via the SSH host, which must then forward requests to the private IP address of your Amazon RDS DB instance.

DB security groups can be used to help secure DB instances within an Amazon VPC. In addition, network traffic entering and exiting each subnet can be allowed or denied via network ACLs. All network traffic entering or exiting your Amazon VPC via your IPsec VPN connection can be inspected by your on-premises security infrastructure, including network firewalls and intrusion detection systems.

Encryption You can encrypt connections between your application and your DB instance using SSL. For MySQL and SQL Server, Amazon RDS creates an SSL certificate and installs the certificate on the DB instance when the instance is provisioned. For MySQL, you launch the MySQL client using the --ssl_ca parameter to reference the public key in order to encrypt connections. For SQL Server, download the public key and import the certificate into your Windows operating system. Oracle RDS uses Oracle-native network encryption with a DB instance. You simply add the native network encryption option to an option group and associate that option group with the DB instance. Once an encrypted connection is established, data transferred between the DB instance and your application will be encrypted during transfer. You can also require your DB instance to accept only encrypted connections.

Amazon RDS supports Transparent Data Encryption (TDE) for SQL Server (SQL Server Enterprise Edition) and Oracle (part of the Oracle Advanced Security option available in Oracle Enterprise Edition). The TDE feature automatically encrypts data before it is written to storage and automatically decrypts data when it is read from storage. If you require your MySQL data to be encrypted while at rest in the database, your application must manage the encryption and decryption of data.

Note that SSL support in Amazon RDS is for encrypting the connection between your application and your DB instance; it should not be relied on for authenticating the DB instance itself. Although SSL offers security benefits, be aware that SSL encryption is a compute-intensive operation and will increase the latency of your database connection.

Automated backups and DB snapshots Amazon RDS provides two different methods for backing up and restoring your DB instances: automated backups and DB snapshots. Turned on by default, the automated backup feature of Amazon RDS enables point-in-time recovery for your DB instances. Amazon RDS will back up your database and transaction logs and store both for a user-specified retention period. This allows you to restore a DB instance to any second during your retention period, up to the last five minutes. Your automatic backup retention period can be configured to up to 35 days.

DB snapshots are user-initiated backups of your DB instances. These full database backups are stored by Amazon RDS until you explicitly delete them. You can copy DB snapshots of any size and move them between any of the AWS public regions, or copy the same snapshot to multiple regions simultaneously. You can then create a new DB instance from a DB snapshot whenever you desire.

During the backup window, storage I/O may be suspended while your data is being backed up. This I/O suspension typically lasts a few minutes. This I/O suspension is avoided with Multi-AZ deployments, because the backup is taken from the standby.
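
Point-in-time recovery from those automated backups can be sketched in boto3 as shown below; the instance identifiers and timestamp are placeholders, and the timestamp must fall within the retention period.

import datetime

import boto3

rds = boto3.client("rds")

# Restore to a specific second within the retention period; the
# restore creates a new DB instance alongside the original.
rds.restore_db_instance_to_point_in_time(
    SourceDBInstanceIdentifier="my-db-instance",
    TargetDBInstanceIdentifier="my-db-instance-restored",
    RestoreTime=datetime.datetime(2017, 5, 1, 12, 30, 0))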

DB instance replication AWS Cloud resources are housed in highly available datacenter facilities in different regions of the world, and each region contains multiple distinct locations called Availability Zones. Each Availability Zone is engineered to be isolated from failures in other Availability Zones and to provide inexpensive, low-latency network connectivity to other Availability Zones in the same region.

To architect for high availability of your Oracle, PostgreSQL, or MySQL databases, you can run your Amazon RDS DB instances in several Availability Zones, an option called a Multi-AZ deployment. When you select this option, AWS automatically provisions and maintains a synchronous standby replica of your DB instances in a different Availability Zone. The primary DB instance is synchronously replicated across Availability Zones to the standby replica. In the event of DB instance or Availability Zone failure, Amazon RDS will automatically fail over to the standby so that database operations can resume quickly without administrative intervention.

For customers who use MySQL and need to scale beyond the capacity constraints of a single DB instance for read-heavy database workloads, Amazon RDS provides a read replica option. Once you create a read replica, database updates on the source DB instance are replicated to the read replica using MySQL’s native, asynchronous replication. You can create multiple read replicas for a given source DB instance and distribute your application’s read traffic among them. Read replicas can be created with Multi-AZ deployments to gain read scaling benefits in addition to the enhanced database write availability and data durability provided by Multi-AZ deployments.

Automatic software patching Amazon RDS will make sure that the relational database software powering your deployment stays up to date with the latest patches. When necessary, patches are applied during a maintenance window that you can control. You can think of the Amazon RDS maintenance window as an opportunity to control when DB instance modifications (such as scaling DB instance class) and software patching occur, in the event that either are requested or required. If a maintenance event is scheduled for a given week, it will be initiated and completed at some point during the 30-minute maintenance window that you identify.

The only maintenance events that require Amazon RDS to take your DB instances offline are scale compute operations (which generally take only a few minutes from start to finish) or required software patching. Required patching is automatically scheduled only for patches that are related to security and durability. Such patching occurs infrequently (typically once every few months) and should seldom require more than a fraction of your maintenance window. If you do not specify a preferred weekly maintenance window when creating a DB instance, a 30-minute default value is assigned. If you want to modify when maintenance is performed on your behalf, you can do so by modifying a DB instance in the AWS Management Console or by using the ModifyDBInstance API. Each DB instance can have different preferred maintenance windows, if you so choose.
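For example, setting the preferred window with boto3 exercises the same ModifyDBInstance API mentioned above; the instance identifier and window below are placeholders.

import boto3

rds = boto3.client("rds")

# Move maintenance to an assumed low-traffic 30-minute window (UTC).
rds.modify_db_instance(
    DBInstanceIdentifier="my-db-instance",
    PreferredMaintenanceWindow="sun:06:00-sun:06:30")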

Running DB instances in a Multi-AZ deployment can further reduce the impact of a maintenance event, as Amazon RDS will conduct maintenance via the following steps:

  1. Perform maintenance on standby.
  2. Promote standby to primary.
  3. Perform maintenance on old primary, which becomes the new standby.

When an Amazon RDS DB instance deletion API (DeleteDBInstance) is run, the DB instance is marked for deletion. Once the instance no longer shows a “deleting” status, it has been removed. At this point, the instance is no longer accessible and, unless a final snapshot copy was requested, it cannot be restored and will not be listed by any of the tools or APIs.

Amazon Redshift Security

Amazon Redshift is a petabyte-scale SQL data warehouse service that runs on highly optimized and managed AWS compute and storage resources. The service has been architected not only to scale up or down rapidly, but also to improve query speeds significantly, even on extremely large datasets. To increase performance, Amazon Redshift uses techniques such as columnar storage, data compression, and zone maps to reduce the amount of I/O needed to perform queries. It also has a Massively Parallel Processing (MPP) architecture, which parallelizes and distributes SQL operations to take advantage of all available resources.

Cluster access By default, clusters that you create are closed to everyone. Amazon Redshift enables you to configure firewall rules (security groups) to control network access to your data warehouse cluster. You can also run Amazon Redshift inside an Amazon VPC to isolate your data warehouse cluster in your own virtual network and connect it to your existing IT infrastructure using industry-standard encrypted IPsec VPN.

The AWS account that creates the cluster has full access to the cluster. Within your AWS account, you can use IAM to create user accounts and manage permissions for those accounts. By using IAM, you can grant different users permission to perform only the cluster operations that are necessary for their work. As with any database, you must grant permissions in Amazon Redshift at the database level in addition to granting access at the resource level. Database users are named user accounts that can connect to a database and are authenticated when they log in to Amazon Redshift. In Amazon Redshift, you grant database user permissions on a per-cluster basis instead of on a per-table basis. However, users can see only the rows in the system tables that were generated by their own activities; rows generated by other users are not visible to them.

The user who creates a database object is its owner. By default, only a super user or the owner of an object can query, modify, or grant permissions on the object. For users to use an object, you must grant the necessary permissions to the user or the group that contains the user. In addition, only the owner of an object can modify or delete it.

Data backups Amazon Redshift distributes your data across all compute nodes in a cluster. When you run a cluster with at least two compute nodes, data on each node will always be mirrored on disks on another node, reducing the risk of data loss. In addition, all data written to a node in your cluster is continuously backed up to Amazon S3 using snapshots. Amazon Redshift stores your snapshots for a user-defined period, which can be from 1 to 35 days. You can also take your own snapshots at any time; these snapshots leverage all existing system snapshots and are retained until you explicitly delete them.

Amazon Redshift continuously monitors the health of the cluster and automatically re-replicates data from failed drives and replaces nodes as necessary. All of this happens without any effort on your part, although you may see a slight performance degradation during the re-replication process.

You can use any system or user snapshot to restore your cluster using the AWS Management Console or the Amazon Redshift APIs. Your cluster is available as soon as the system metadata has been restored, and you can start running queries while user data is spooled down in the background.

Data encryption When creating a cluster, you can choose to encrypt it in order to provide additional protection for your data at rest. When you enable encryption in your cluster, Amazon Redshift stores all data in user-created tables in an encrypted format using hardware-accelerated AES-256 block encryption keys. This includes all data written to disk and any backups.

Amazon Redshift uses a four-tier, key-based architecture for encryption. These keys consist of data encryption keys, a database key, a cluster key, and a master key.

  • Data encryption keys encrypt data blocks in the cluster. Each data block is assigned a randomly generated AES-256 key. These keys are encrypted by using the database key for the cluster.
  • The database key encrypts data encryption keys in the cluster. The database key is a randomly generated AES-256 key. It is stored on disk in a separate network from the Amazon Redshift cluster and encrypted by a master key. Amazon Redshift passes the database key across a secure channel and keeps it in memory in the cluster.
  • The cluster key encrypts the database key for the Amazon Redshift cluster. You can use either AWS or a Hardware Security Module (HSM) to store the cluster key. HSMs provide direct control of key generation and management and make key management separate and distinct from the application and the database.
  • The master key encrypts the cluster key if it is stored in AWS. The master key encrypts the cluster-key-encrypted database key if the cluster key is stored in an HSM.

You can have Amazon Redshift rotate the encryption keys for your encrypted clusters at any time. As part of the rotation process, keys are also updated for all of the cluster’s automatic and manual snapshots. Note that enabling encryption in your cluster will impact performance, even though it is hardware accelerated.
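
The boto3 sketch below creates an encrypted cluster and then rotates its keys on demand; the cluster identifier, node type, and credentials are placeholder assumptions.

import boto3

redshift = boto3.client("redshift")

# Create a cluster with encryption enabled at launch.
redshift.create_cluster(
    ClusterIdentifier="my-encrypted-cluster",
    NodeType="dc1.large",
    MasterUsername="admin",
    MasterUserPassword="ExamplePassw0rd",  # placeholder credential
    Encrypted=True)

# Rotate the cluster's encryption keys whenever required; snapshots
# are re-keyed as part of the rotation.
redshift.rotate_encryption_key(ClusterIdentifier="my-encrypted-cluster")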

Encryption also applies to backups. When you are restoring from an encrypted snapshot, the new cluster will be encrypted as well.

To encrypt your table load data files when you upload them to Amazon S3, you can use Amazon S3 SSE. When you load the data from Amazon S3, the COPY command will decrypt the data as it loads the table.

Database audit logging Amazon Redshift logs all SQL operations, including connection attempts, queries, and changes to your database. You can access these logs using SQL queries against system tables or choose to have them downloaded to a secure Amazon S3 bucket. You can then use these audit logs to monitor your cluster for security and troubleshooting purposes.
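
Shipping those audit logs to Amazon S3 can be sketched with boto3 as follows; the cluster identifier, bucket, and prefix are placeholders, and the bucket policy must already permit Amazon Redshift to write to it.

import boto3

redshift = boto3.client("redshift")

# Deliver connection and user-activity logs to the assumed bucket.
redshift.enable_logging(
    ClusterIdentifier="my-cluster",
    BucketName="my-audit-logs",
    S3KeyPrefix="redshift/")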

Automatic software patching Amazon Redshift manages all the work of setting up, operating, and scaling your data warehouse, including provisioning capacity, monitoring the cluster, and applying patches and upgrades to the Amazon Redshift engine. Patches are applied only during specified maintenance windows.

SSL connections To protect your data in transit within the AWS Cloud, Amazon Redshift uses hardware-accelerated SSL to communicate with Amazon S3 or Amazon DynamoDB for COPY, UNLOAD, backup, and restore operations. You can encrypt the connection between your client and the cluster by specifying SSL in the parameter group associated with the cluster. To have your clients also authenticate the Amazon Redshift server, you can install the public key (.pem file) for the SSL certificate on your client and use the key to connect to your clusters.
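
Requiring SSL for client connections is a single parameter change; as an example (the parameter group name below is a placeholder):

    # Require SSL on client connections by setting require_ssl to true
    # in the parameter group associated with the cluster
    aws redshift modify-cluster-parameter-group \
        --parameter-group-name my-param-group \
        --parameters ParameterName=require_ssl,ParameterValue=true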

Amazon Redshift offers the newer, stronger cipher suites that use the Elliptic Curve Diffie-Hellman Ephemeral (ECDHE) protocol. ECDHE allows SSL clients to provide perfect forward secrecy between the client and the Amazon Redshift cluster. Perfect forward secrecy uses session keys that are ephemeral and not stored anywhere, which prevents the decoding of captured data by unauthorized third parties, even if the secret long-term key itself is compromised. You do not need to configure anything in Amazon Redshift to enable ECDHE; if you connect from a SQL client tool that uses ECDHE to encrypt communication between the client and server, Amazon Redshift will use the provided cipher list to make the appropriate connection. For more information on Amazon Redshift, refer to Chapter 7, “Databases.”

Amazon ElastiCache Security

Amazon ElastiCache is a web service that makes it easy to set up, manage, and scale distributed in-memory cache environments in the cloud. The service improves the performance of web applications by allowing you to retrieve information from a fast, managed, in-memory caching system instead of relying entirely on slower disk-based databases. It can be used to improve latency and throughput significantly for many read-heavy application workloads (such as social networking, gaming, media sharing, and Q&A portals) or compute-intensive workloads (such as a recommendation engine). Caching improves application performance by storing critical pieces of data in memory for low-latency access. Cached information may include the results of I/O-intensive database queries or the results of computationally intensive calculations.

The Amazon ElastiCache service automates time-consuming management tasks for in-memory cache environments, such as patch management, failure detection, and recovery. It works in conjunction with other AWS Cloud services (such as Amazon EC2, Amazon CloudWatch, and Amazon Simple Notification Service [Amazon SNS]) to provide a secure, high-performance, and managed in-memory cache. For example, an application running in Amazon EC2 can securely access an Amazon ElastiCache cluster in the same region with very low latency.

Using the Amazon ElastiCache service, you create a cache cluster, which is a collection of one or more cache nodes, each running an instance of a supported cache engine (Memcached or Redis). A cache node is a fixed-size chunk of secure, network-attached RAM with its own DNS name and port. Multiple types of cache nodes are supported, each with varying amounts of associated memory. A cache cluster can be set up with a specific number of cache nodes and a cache parameter group that controls the properties for each cache node. All cache nodes in a cache cluster are designed to be of the same node type and have the same parameter and security group settings.

Data access Amazon ElastiCache allows you to control access to your cache clusters using cache security groups. A cache security group acts like a firewall, controlling network access to your cache cluster. By default, network access is turned off to your cache clusters. If you want your applications to access your cache cluster, you must explicitly enable access from hosts in specific Amazon EC2 security groups. Once ingress rules are configured, the same rules apply to all cache clusters associated with that cache security group.

To allow network access to your cache cluster, create a cache security group and use the Authorize Cache Security Group Ingress API or AWS CLI command to authorize the desired Amazon EC2 security group (which in turn specifies the Amazon EC2 instances allowed). IP range-based access control is currently not enabled for cache clusters. All clients to a cache cluster must be within the Amazon EC2 network and authorized via cache security groups.
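
As a sketch with the AWS CLI (the group names and account ID are placeholders):

    # Create a cache security group
    aws elasticache create-cache-security-group \
        --cache-security-group-name my-cache-sg \
        --description "Cache access for app tier"

    # Authorize members of an EC2 security group to reach the cache
    aws elasticache authorize-cache-security-group-ingress \
        --cache-security-group-name my-cache-sg \
        --ec2-security-group-name my-app-sg \
        --ec2-security-group-owner-id 123456789012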

Amazon ElastiCache for Redis provides backup and restore functionality, where you can create a snapshot of your entire Redis cluster as it exists at a specific point in time. You can schedule automatic, recurring daily snapshots, or you can create a manual snapshot at any time. For automatic snapshots, you specify a retention period; manual snapshots are retained until you delete them. The snapshots are stored in Amazon S3 with high durability and can be used for warm starts, backups, and archiving.
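
A manual snapshot of a Redis cluster can be taken from the AWS CLI, for example (identifiers are placeholders):

    # Create a manual snapshot of a Redis cache cluster; manual
    # snapshots are retained until you delete them
    aws elasticache create-snapshot \
        --cache-cluster-id my-redis-cluster \
        --snapshot-name my-redis-manual-snap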

Application Services

AWS offers a variety of managed services to use with your applications, including services that provide application streaming, queuing, push notification, email delivery, search, and transcoding.

Amazon Simple Queue Service (Amazon SQS) Security

Amazon Simple Queue Service (Amazon SQS) is a highly reliable, scalable message queuing service that enables asynchronous message-based communication between distributed components of an application. The components can be computers, Amazon EC2 instances, or a combination of both. With Amazon SQS, you can send any number of messages to an Amazon SQS queue at any time from any component. The messages can be retrieved from the same component or a different one, right away or at a later time (within 14 days). Messages are highly durable; each message is persistently stored in highly available, highly reliable queues. Multiple processes can read from and write to an Amazon SQS queue at the same time without interfering with each other.

Data access Amazon SQS access is granted based on an AWS account or a user created with IAM. Once authenticated, the AWS account has full access to all user operations. An IAM user, however, only has access to the operations and queues for which they have been granted access via policy. By default, access to each individual queue is restricted to the AWS account that created it. However, you can grant other accounts or users access to a queue using either an Amazon SQS-generated policy or a policy that you write.
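
For example, the following AWS CLI sketch grants a second AWS account permission to send messages to a queue (the queue URL, label, and account IDs are placeholders); Amazon SQS generates the corresponding policy on your behalf:

    # Allow account 210987654321 to send messages to the queue
    aws sqs add-permission \
        --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/my-queue \
        --label AllowPartnerSendMessage \
        --aws-account-ids 210987654321 \
        --actions SendMessage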

Encryption Amazon SQS is accessible via SSL-encrypted endpoints. The encrypted endpoints are accessible from both the Internet and from within Amazon EC2. Data stored within Amazon SQS can be encrypted. Additionally, you can encrypt data before it is uploaded to Amazon SQS, provided that the application using the queue has a means to decrypt the message when it is retrieved. Encrypting messages within Amazon SQS helps protect against access to sensitive customer data by unauthorized persons, including AWS.

Amazon Simple Notification Service (Amazon SNS) Security

Amazon SNS is a web service that makes it easy to set up, operate, and send notifications from the cloud. It provides developers with a highly scalable, flexible, and cost-effective capability to publish messages from an application and immediately deliver them to subscribers or other applications. Amazon SNS provides a simple interface that can be used to create topics about which customers want to notify applications (or people), subscribe clients to those topics, publish messages, and have those messages delivered over the clients’ protocol of choice (such as HTTP/HTTPS or email).

Amazon SNS delivers notifications to clients using a push mechanism that eliminates the need to check or poll for new information and updates periodically. Amazon SNS can be leveraged to build highly reliable, event-driven workflows and messaging applications without the need for complex middleware and application management. The potential uses for Amazon SNS include monitoring applications, workflow systems, time-sensitive information updates, mobile applications, and many others.

Data access Amazon SNS provides access control mechanisms so that topics and messages are secured against unauthorized access. Topic owners can set policies for a topic that restrict who can publish or subscribe to a topic. Additionally, topic owners can encrypt transmission by specifying that the delivery mechanism must be HTTPS. Amazon SNS access is granted based on an AWS account or a user created with IAM. Once authenticated, the AWS account has full access to all user operations. An IAM user, however, only has access to the operations and topics for which they have been granted access via policy. By default, access to each individual topic is restricted to the AWS account that created it. You can allow other access to Amazon SNS using either an Amazon SNS-generated policy or a policy you write.
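
As an illustrative sketch (the topic ARN, endpoint, and account ID are placeholders), you can subscribe an HTTPS endpoint so that deliveries are encrypted in transit and grant another account permission to publish:

    # Deliver notifications to an HTTPS endpoint
    aws sns subscribe \
        --topic-arn arn:aws:sns:us-east-1:123456789012:my-topic \
        --protocol https \
        --notification-endpoint https://example.com/sns-handler

    # Permit another account to publish to the topic
    aws sns add-permission \
        --topic-arn arn:aws:sns:us-east-1:123456789012:my-topic \
        --label AllowPartnerPublish \
        --aws-account-id 210987654321 \
        --action-name Publish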

Analytics Services

AWS provides cloud-based analytics services to help you process and analyze any volume of data, whether your need is for managed Hadoop clusters, real-time streaming data, petabyte-scale data warehousing, or orchestration.

Amazon EMR Security

Amazon EMR is a managed web service that you can use to run Hadoop clusters that process vast amounts of data by distributing the work and data among several servers. It uses an enhanced version of the Apache Hadoop framework running on the web-scale infrastructure of Amazon EC2 and Amazon S3. You simply upload your input data and a data processing application into Amazon S3. Amazon EMR then launches the number of Amazon EC2 instances you specify. The service begins the job flow execution while pulling the input data from Amazon S3 into the launched Amazon EC2 instances. Once the job flow is finished, Amazon EMR transfers the output data to Amazon S3, where you can then retrieve it or use it as input in another job flow.

When launching job flows on your behalf, Amazon EMR sets up two Amazon EC2 security groups: one for the master nodes and another for the slaves. The master security group has a port open for communication with the service. It also has the SSH port open to allow you to use SSH to connect into the instances using the key specified at startup. The slaves start in a separate security group, which only allows interaction with the master instance. By default, both security groups are set up to prohibit access from external sources, including Amazon EC2 instances belonging to other customers. Because these are security groups within your account, you can reconfigure them using the standard Amazon EC2 tools or dashboard. To protect customer input and output datasets, Amazon EMR transfers data to and from Amazon S3 using SSL.

Amazon EMR provides several ways to control access to the resources of your cluster. You can use IAM to create user accounts and roles and configure permissions that control which AWS features those users and roles can access. When you launch a cluster, you can associate an Amazon EC2 key pair with the cluster, which you can then use when you connect to the cluster using SSH. You can also set permissions that allow users other than the default Hadoop user to submit jobs to your cluster.
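
A minimal launch sketch from the AWS CLI (the cluster name, key pair, release label, and sizing are placeholder choices) associates an Amazon EC2 key pair for SSH access and lets Amazon EMR create its default security groups and roles:

    # Launch a small cluster; EMR creates the master and slave
    # security groups automatically
    aws emr create-cluster \
        --name my-emr-cluster \
        --release-label emr-5.5.0 \
        --applications Name=Hadoop \
        --instance-type m4.large \
        --instance-count 3 \
        --ec2-attributes KeyName=my-keypair \
        --use-default-roles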

If an IAM user launches a cluster, that cluster is hidden from other IAM users on the AWS account by default. This filtering occurs on all Amazon EMR interfaces (the AWS Management Console, AWS CLI, API, and AWS SDKs) and helps prevent IAM users from accessing and inadvertently changing clusters created by other IAM users.

For an additional layer of protection, you can launch the Amazon EC2 instances of your Amazon EMR cluster into an Amazon VPC, which is like launching it into a private subnet. This allows you to control access to the entire subnet. You can also launch the cluster into an Amazon VPC and enable the cluster to access resources on your internal network using a VPN connection. You can encrypt the input data before you upload it to Amazon S3 using any common data encryption tool. If you do encrypt the data before it is uploaded, you then need to add a decryption step to the beginning of your job flow when Amazon EMR fetches the data from Amazon S3.

Amazon Kinesis Security

Amazon Kinesis is a managed service designed to handle real-time streaming of big data. It can accept virtually any amount of data, from any number of sources, scaling up and down as needed. You can use Amazon Kinesis in situations that call for large-scale, real-time data ingestion and processing, such as server logs, social media, market data feeds, and web clickstream data. Applications read and write data records to Amazon Kinesis in streams. You can create any number of Amazon Kinesis streams to capture, store, and transport data.

You can control logical access to Amazon Kinesis resources and management functions by creating users under your AWS account using IAM and controlling which Amazon Kinesis operations these users have permission to perform. To facilitate running your producer or consumer applications on an Amazon EC2 instance, you can configure that instance with an IAM role. That way, AWS credentials that reflect the permissions associated with the IAM role are made available to applications on the instance, which means that you don’t have to use your long-term AWS security credentials. Roles have the added benefit of providing temporary credentials that expire within a short timeframe, which adds a measure of protection.

The Amazon Kinesis API is only accessible via an SSL-encrypted endpoint (kinesis.us-east-1.amazonaws.com) to help ensure secure transmission of your data to AWS. You must connect to that endpoint to access Amazon Kinesis, but you can then use the API to direct Amazon Kinesis to create a stream in any AWS Region.
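
A short AWS CLI sketch (the stream name is a placeholder, and the data payload is passed as plain text, following AWS CLI version 1 conventions) shows creating a stream and writing a record; the CLI signs each request and sends it over the SSL-encrypted endpoint:

    # Create a stream with a single shard
    aws kinesis create-stream \
        --stream-name my-stream \
        --shard-count 1

    # Write one record to the stream
    aws kinesis put-record \
        --stream-name my-stream \
        --partition-key user-42 \
        --data "hello"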

Deployment and Management Services

AWS provides a variety of tools to help with the deployment and management of your applications. This includes services that allow you to create individual user accounts with credentials for access to AWS Cloud services. It also includes services for creating and updating stacks of AWS resources, deploying applications on those resources, and monitoring the health of those AWS resources. Other tools help you manage cryptographic keys using HSMs and log API activity for security and compliance purposes.

AWS Identity and Access Management (IAM) Security

As discussed previously, IAM allows you to create multiple users and manage the permissions for each of these users within your AWS account. A user is an identity (within an AWS account) with unique security credentials that can be used to access AWS Cloud services. Thus IAM eliminates the need to share passwords or keys and makes it easy to enable or disable a user’s access as appropriate. IAM is integrated with AWS CloudFormation. More information on AWS CloudFormation can be found in Chapter 8.

IAM enables you to minimize the use of your AWS account credentials. Once you create IAM user accounts, all interactions with AWS Cloud services and resources should occur with IAM user security credentials.

Roles As discussed earlier in this chapter, an IAM role uses temporary security credentials to allow you to delegate access to users or services that normally do not have access to your AWS resources. A role is a set of permissions used to access specific AWS resources, but these permissions are not tied to a specific IAM user or group. An authorized entity (for example, a mobile user or Amazon EC2 instance) assumes a role and receives temporary security credentials for authenticating to the resources defined in the role. Temporary security credentials provide enhanced security due to their short lifespan (the default expiration is 12 hours) and the fact that they cannot be reused after they expire. This can be particularly useful in providing limited, controlled access in certain situations.

  • Federated (Non-AWS) User Access Federated users are users (or applications) that do not have AWS accounts. With roles, you can give them access to your AWS resources for a limited amount of time. This is useful if you have non-AWS users who you can authenticate with an external service, such as Microsoft Active Directory, Lightweight Directory Access Protocol (LDAP), or Kerberos. The temporary AWS credentials used with the roles provide identity federation between AWS and your non-AWS users in your corporate identity and authorization system.
  • Security Assertion Markup Language (SAML) 2.0 If your organization supports Security Assertion Markup Language (SAML) 2.0, you can create trust between your organization as an Identity Provider (IdP) and other organizations as service providers. In AWS, you can configure AWS as the service provider, and use SAML to provide your users with federated Single-Sign On (SSO) to the AWS Management Console or to get federated access to call AWS APIs.
  • Roles are also useful if you create a mobile or web-based application that accesses AWS resources. AWS resources require security credentials for programmatic requests; however, you shouldn’t embed long-term security credentials in your application because they are accessible to the application’s users and can be difficult to rotate. Instead, you can let users sign in to your application using Login with Amazon, Facebook, or Google, and then use their authentication information to assume a role and get temporary security credentials.
  • Cross-account access For organizations that use multiple AWS accounts to manage their resources, you can set up roles to provide users who have permissions in one account to access resources under another account. For organizations that have personnel who only rarely need access to resources under another account, using roles helps to ensure that credentials are provided temporarily and only as needed.
  • Applications running on Amazon EC2 instances that need to access AWS resources If an application runs on an Amazon EC2 instance and needs to make requests for AWS resources (such as Amazon S3 buckets, Amazon DynamoDB tables), it must have security credentials. Using roles instead of creating individual IAM accounts for each application on each instance can save significant time for customers who manage a large number of instances or an elastically scaling fleet using Auto Scaling.

The temporary credentials include a security token, an access key ID, and a secret access key. To give a user access to certain resources, you distribute the temporary security credentials to the user to whom you are granting temporary access. When the user makes calls to your resources, the user passes in the token and access key ID and signs the request with the secret access key. The token will not work with different access keys.

The use of temporary credentials provides additional protection because you do not have to manage or distribute long-term credentials to temporary users. In addition, with Amazon EC2 roles, the temporary credentials are delivered automatically to the target instance, so you don’t have to embed them somewhere unsafe such as your code. Temporary credentials are rotated automatically multiple times a day without any action on your part and are stored securely by default. A minimal sketch of requesting such credentials follows.
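
Here, from the AWS CLI (the role ARN and session name are placeholders), the caller assumes a role and receives the security token, access key ID, and secret access key described above:

    # Assume a role and receive temporary security credentials
    aws sts assume-role \
        --role-arn arn:aws:iam::123456789012:role/ReadOnlyAuditor \
        --role-session-name audit-session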

Mobile Services

AWS mobile services make it easier for you to build, ship, run, monitor, optimize, and scale cloud-powered applications for mobile devices. These services also help you authenticate users to your mobile application, synchronize data, and collect and analyze application usage.

Amazon Cognito Security

Amazon Cognito provides identity and sync services for mobile and web-based applications. It simplifies the task of authenticating users and storing, managing, and syncing their data across multiple devices, platforms, and applications. It provides temporary, limited-privilege credentials for both authenticated and unauthenticated users without having to manage any back-end infrastructure.

Amazon Cognito works with well-known identity providers like Google, Facebook, and Amazon to authenticate end users of your mobile and web applications. You can take advantage of the identification and authorization features provided by these services instead of having to build and maintain your own. Your application authenticates with one of these identity providers using the provider’s SDK. Once the end user is authenticated with the provider, an OAuth or OpenID Connect token returned from the provider is passed by your application to Amazon Cognito, which returns a new Amazon Cognito ID for the user and a set of temporary, limited-privilege AWS credentials.

To begin using Amazon Cognito, you create an identity pool through the Amazon Cognito console. The identity pool is a store of user identity information that is specific to your AWS account. During the creation of the identity pool, you will be asked to create a new IAM role or pick an existing one for your end users. An IAM role is a set of permissions used to access specific AWS resources, but these permissions are not tied to a specific IAM user or group. An authorized entity (for example, mobile user, Amazon EC2 instance) assumes a role and receives temporary security credentials for authenticating to the AWS resources defined in the role. Temporary security credentials provide enhanced security due to their short lifespan (the default expiration is 12 hours) and the fact that they cannot be reused after they expire.
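
In practice, your application performs this exchange through the AWS SDKs; the equivalent AWS CLI sketch (the identity pool ID, identity ID, and provider token are placeholders) looks like this:

    # Exchange a provider token for an Amazon Cognito ID...
    aws cognito-identity get-id \
        --identity-pool-id us-east-1:11111111-2222-3333-4444-555555555555 \
        --logins graph.facebook.com=PROVIDER_TOKEN

    # ...then obtain temporary, limited-privilege AWS credentials for it
    aws cognito-identity get-credentials-for-identity \
        --identity-id us-east-1:66666666-7777-8888-9999-000000000000 \
        --logins graph.facebook.com=PROVIDER_TOKEN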

The role you select has an effect on which AWS Cloud services your end users will be able to access with the temporary credentials. By default, Amazon Cognito creates a new role with limited permissions; end users only have access to the Amazon Cognito Sync service and Amazon Mobile Analytics. If your application needs access to other AWS resources, such as Amazon S3 or Amazon DynamoDB, you can modify your roles directly from the IAM console.

With Amazon Cognito, there is no need to create individual AWS accounts or even IAM accounts for every one of your web/mobile application end users who will need to access your AWS resources. In conjunction with IAM roles, mobile users can securely access AWS resources and application features and even save data to the AWS Cloud without having to create an account or log in. If they choose to create an account or log in later, Amazon Cognito will merge data and identification information.

Because Amazon Cognito stores data locally and in the service, your end users can continue to interact with their data even when they are offline. Their offline data may be stale, but they can immediately retrieve anything they put into the dataset whether or not they are online. The client SDK manages a local SQLite store so that the application can work even when it is not connected. The SQLite store functions as a cache and is the target of all the read and write operations. Amazon Cognito’s sync facility compares the local version of the data to the cloud version and pushes up or pulls down deltas as needed. Note that in order to sync data across devices, your identity pool must support authenticated identities. Unauthenticated identities are tied to the device, so unless an end user authenticates, no data can be synced across multiple devices.

With Amazon Cognito, your application communicates directly with a supported public identity provider (Amazon, Facebook, or Google) to authenticate users. Amazon Cognito does not receive or store user credentials, only the OAuth or OpenID Connect token received from the identity provider. Once Amazon Cognito receives the token, it returns a new Amazon Cognito ID for the user and a set of temporary, limited-privilege AWS credentials. Each Amazon Cognito identity has access only to its own data in the sync store, and this data is encrypted when stored. In addition, all identity data is transmitted over HTTPS. The unique Amazon Cognito identifier on the device is stored in the appropriate secure location. For example, on iOS the Amazon Cognito identifier is stored in the iOS keychain, and user data is cached in a local SQLite database in the application’s sandbox. If you require additional security, you can encrypt this identity data in the local cache by implementing encryption in your application.

Applications

AWS applications are managed services that enable you to provide your users with secure, centralized storage and work areas in the cloud.

Amazon WorkSpaces Security

Amazon WorkSpaces is a managed desktop service that allows you to provision cloud-based desktops quickly for your users. Simply choose a Windows 7 or Windows 10 bundle that best meets the needs of your users and the number of WorkSpaces that you would like to launch. Once the WorkSpaces are ready, users receive an email informing them where they can download the relevant client and log in to their WorkSpace. They can then access their cloud-based desktops from a variety of endpoint devices, including PCs, laptops, and mobile devices. However, your organization’s data is never sent to or stored on the end-user device, because Amazon WorkSpaces uses PC-over-IP (PCoIP), which provides an interactive video stream without transmitting actual data. The PCoIP protocol compresses, encrypts, and encodes the user’s desktop computing experience and transmits it as pixels only, across any standard IP network, to end-user devices.

In order to access their Amazon WorkSpaces, users must sign in using a set of unique credentials or their regular Active Directory credentials. When you integrate Amazon WorkSpaces into your corporate Active Directory, each WorkSpace joins your Active Directory domain and can be managed just like any other desktop in your organization. This means that you can use Active Directory group policies to manage your users’ WorkSpaces to specify configuration options that control their desktops. If you choose not to use Active Directory or another type of on-premises directory to manage your users’ WorkSpaces, you can create a private cloud directory in Amazon WorkSpaces that you can use for administration.

To provide an additional layer of security, you can also require the use of MFA upon sign-in, in the form of a hardware or software token. Amazon WorkSpaces supports MFA using an on-premises Remote Authentication Dial In User Service (RADIUS) server or any security provider that supports RADIUS authentication. It currently supports the Password Authentication Protocol (PAP), Challenge-Handshake Authentication Protocol (CHAP), Microsoft CHAP versions 1 and 2 (MS-CHAPv1 and MS-CHAPv2), and RADIUS proxies.

Each WorkSpace resides on its own Amazon EC2 instance within an Amazon VPC. You can create WorkSpaces in an Amazon VPC you already own or have the Amazon WorkSpaces service create one for you automatically using the Amazon WorkSpaces Quick Start option. When you use the Quick Start option, Amazon WorkSpaces not only creates the Amazon VPC, but it also performs several other provisioning and configuration tasks for you, such as creating an Internet gateway for the Amazon VPC, setting up a directory in the Amazon VPC that is used to store user and WorkSpace information, creating a directory administrator account, creating the specified user accounts and adding them to the directory, and creating the Amazon WorkSpaces instances. The Amazon VPC can also be connected to an on-premises network using a secure VPN connection to allow access to an existing on-premises Active Directory and other intranet resources. You can add a security group that you create in your Amazon VPC to all of the WorkSpaces that belong to your Active Directory. This allows you to control network access from Amazon WorkSpaces in your Amazon VPC to other resources in your Amazon VPC and on-premises network.

Persistent storage for Amazon WorkSpaces is provided by Amazon EBS and is automatically backed up twice a day to Amazon S3. If Amazon WorkSpaces Sync is enabled on a WorkSpace, the folder a user chooses to sync will be continuously backed up and stored in Amazon S3. You can also use Amazon WorkSpaces Sync on a Mac or PC to sync documents to or from your WorkSpace so that you can always have access to your data regardless of the desktop computer you are using.

Because Amazon WorkSpaces is a managed service, AWS takes care of several security and maintenance tasks, such as backups and patching. Updates are delivered automatically to your WorkSpaces during a weekly maintenance window, and you control how patching is configured for a user’s WorkSpace. For the underlying operating system, Windows Update is turned on by default and configured to install updates weekly; you can customize these settings to perform updates at a time of your choosing or use an alternative patch management approach if you desire. You can use IAM to control who on your team can perform administrative functions like creating or deleting WorkSpaces or setting up user directories. You can also set up a WorkSpace for directory administration, install your favorite Active Directory administration tools, and create organizational units and Group Policies in order to apply Active Directory changes more easily for all of your Amazon WorkSpaces users.

Summary

In this chapter, you learned that the first priority at AWS is the security of the cloud. Security within AWS is based on a “defense in depth” model, where no single element is relied on to secure systems on AWS. Rather, AWS uses a multitude of elements, each acting at a different layer of a system, to secure the system as a whole. As you learned in the shared responsibility model, AWS is responsible for some layers of this model, and you are responsible for others. AWS also offers security tools and service features that customers can use at their discretion. These concepts, tools, and features were discussed in this chapter.

Exam Essentials

Understand the shared responsibility model. AWS is responsible for securing the underlying infrastructure that supports the cloud, and you are responsible for anything you put on the cloud or connect to the cloud.

Understand regions and Availability Zones. Each region is completely independent. Each region is designed to be completely isolated from the other regions. This achieves the greatest possible fault tolerance and stability. Regions are collections of Availability Zones. Each Availability Zone is isolated, but the Availability Zones in a region are connected through low-latency links.

Understand high-availability system design within AWS. You should architect your AWS usage to take advantage of multiple regions and Availability Zones. Distributing applications across multiple Availability Zones provides the ability to remain resilient in the face of most failure modes, including natural disasters or system failures.

Understand the network security of AWS. Network devices, including firewall and other boundary devices, are in place to monitor and control communications at the external boundary of the network and at key internal boundaries within the network. These boundary devices employ rule sets, ACLs, and configurations to enforce the flow of information to specific information system services.

AWS has strategically placed a limited number of access points to the cloud to allow for a more comprehensive monitoring of inbound and outbound communications and network traffic. These customer access points are called API endpoints, and they allow HTTPS access, which lets you establish a secure communication session with your storage or compute instances within AWS.

Amazon EC2 instances cannot send spoofed network traffic. The AWS-controlled, host-based firewall infrastructure will not permit an instance to send traffic with a source IP or MAC address other than its own.

Unauthorized port scans by Amazon EC2 customers are a violation of the AWS Acceptable Use Policy. Violations of the AWS Acceptable Use Policy are taken seriously, and every reported violation is investigated.

It is not possible for an Amazon EC2 instance running in promiscuous mode to receive or sniff traffic that is intended for a different virtual instance.

Understand the use of credentials on AWS. AWS employs several credentials in order to positively identify a user or authorize an API call to the platform. Credentials include:

  • Passwords: AWS root account or IAM user account login to the AWS Management Console
  • MFA: an additional, single-use code required at sign-in when enabled
  • Access keys: digitally signed requests to AWS APIs (using the AWS SDK, AWS CLI, or REST/Query APIs)

Understand the proper use of access keys. Because access keys can be misused if they fall into the wrong hands, AWS encourages you to save them in a safe place and not to embed them in your code. For customers with large fleets of elastically scaling Amazon EC2 instances, the use of IAM roles can be a more secure and convenient way to manage the distribution of access keys.

Understand the value of AWS CloudTrail. AWS CloudTrail is a web service that records API calls made on your account and delivers log files to your Amazon S3 bucket. AWS CloudTrail’s benefit is visibility into account activity by recording API calls made on your account.

Understand the security features of Amazon EC2. Amazon EC2 uses public-key cryptography to encrypt and decrypt login information. Public-key cryptography uses a public key to encrypt a piece of data, such as a password, and then the recipient uses the private key to decrypt the data. The public and private keys are known as a key pair.

To log in to your instance, you must create a key pair, specify the name of the key pair when you launch the instance, and provide the private key when you connect to the instance. Linux instances have no password, and you use a key pair to log in using SSH. With Windows instances, you use a key pair to obtain the administrator password and then log in using RDP.

A security group acts as a virtual firewall that controls the traffic for one or more instances. When you launch an instance, you associate one or more security groups with the instance. You add rules to each security group that allow traffic to or from its associated instances. You can modify the rules for a security group at any time; the new rules are automatically applied to all instances that are associated with the security group.

Understand AWS use of encryption of data in transit. All service endpoints support encryption of data in transit via HTTPS.

Know which services offer encryption of data at rest as a feature. The following services offer a feature to encrypt data at rest:

  • Amazon S3
  • Amazon EBS
  • Amazon EMR
  • AWS Snowball
  • Amazon Glacier
  • AWS Storage Gateway
  • Amazon RDS
  • Amazon Redshift
  • Amazon WorkSpaces

Exercises

By now you should have set up an AWS account; if you haven’t already, now would be the time to do so. It is important to note that these exercises run in your AWS account and may therefore incur charges.

Use the Free Tier when launching resources. The AWS Free Tier applies to participating services across the following AWS Regions: US East (Northern Virginia), US West (Oregon), US West (Northern California), Canada (Central), EU (London), EU (Ireland), EU (Frankfurt), Asia Pacific (Singapore), Asia Pacific (Tokyo), Asia Pacific (Sydney), and South America (Sao Paulo). For more information, see https://aws.amazon.com/s/dm/optimization/server-side-test/free-tier/free_np/.

These exercises assume that you have installed the AWS Command Line Utilities. Refer to Chapter 2 Exercise 2.1 (Linux) or Exercise 2.2 (Windows) if you need to install the AWS Command Line Interface (AWS CLI).

The reference for the AWS Command Line Interface can be found at: http://docs.aws.amazon.com/cli/latest/reference/.

These exercises will create AWS Identity and Access Management users, groups, policies, and roles. To complete them, you will research how to perform each task; by looking up how to do so, you will be on your way to mastering it. The goal of this book isn’t just to prepare you to pass the AWS Certified SysOps Administrator – Associate exam; rather, it should serve as a reference companion in your day-to-day duties as an AWS Certified SysOps Administrator.

In Exercise 3.1, you will create three IAM users. There are two ways to accomplish this task:

  1. AWS Management Console (Firefox or Chrome browsers)
  2. The AWS CLI

You will use the AWS CLI for Exercises 3.1, 3.2, and 3.3.
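
As a starting hint (the user name is a placeholder; consult the AWS CLI reference cited above for the full syntax and options), creating an IAM user from the AWS CLI looks like this:

    # Create one of the three IAM users for Exercise 3.1
    aws iam create-user --user-name user1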

Review Questions

  1. Whose responsibility is it to secure the AWS Cloud?

    1. Only Amazon Web Services
    2. Only you
    3. The World Wide Web Consortium (W3C)
    4. You and AWS share the responsibility.
  2. For which aspects of physical and environmental security is Amazon Web Services responsible?

    1. Fire detection and suppression
    2. Power redundancy
    3. Climate and temperature control in AWS datacenters
    4. All of the above
  3. True or False: The AWS network provides protection against traditional network security issues.

    1. True
    2. False
  4. Which AWS service provides centralized management of access and authentication of users administering the services in an AWS account?

    1. AWS Directory Service
    2. AWS Identity and Access Management Service
    3. Amazon Cognito
    4. AWS Config
  5. Which credentials can an IAM user have in order to access AWS services via the AWS Management Console and the AWS Command Line Interface (AWS CLI)? (Choose two.)

    1. Key pair
    2. User name and password
    3. Email address and password
    4. Access keys
  6. True or False: A password policy can be set in IAM that requires at least two lowercase letters and at least two non-alphanumeric characters.

    1. True
    2. False
  7. The IAM access keys used to access AWS services via the AWS Command Line Interface (AWS CLI) and/or AWS Software Development Kits (SDK) consist of which two parts?

    1. Access Key ID and password
    2. Public Access Key and Secret Access Key
    3. Access Key ID and Secret Access Key
    4. User name and Public Access Key
  8. Which Multi-Factor Authentication devices does the IAM service support?

    1. Hardware devices (Gemalto)
    2. Virtual MFA applications (for example, Google Authenticator)
    3. Simple Message Service (SMS) (via mobile devices)
    4. All of the above
  9. Which of the following is true when using AWS Identity and Access Management groups?

    1. IAM users are members of a default user group.
    2. Groups can be nested.
    3. An IAM user can be a member of multiple groups.
    4. IAM roles can be members of a group.
  10. Which of the following is not a best practice for securing an AWS account?

    1. Requiring Multi-Factor Authentication for root-level access
    2. Creating individual IAM users
    3. Monitoring activity on your AWS account
    4. Sharing credentials to provide cross-account access
  11. Which of the following is true when using AWS Key Management Service (AWS KMS)?

    1. All API requests to AWS KMS must be made over HTTP.
    2. Use of keys is protected by access control policies defined and managed by you.
    3. An individual AWS employee can access a Customer Master Key (CMK) and export the CMK in plaintext.
    4. An AWS KMS key can be used globally in any AWS Region.
  12. The AWS CloudTrail service provides which of the following?

    1. Logs of the API requests for AWS resources within your account
    2. Information about the IP traffic going to and from network interfaces
    3. Monitoring of the utilization of AWS resources within your account
    4. Information on configuration changes to AWS resources within your AWS account
  13. Amazon CloudWatch Logs enable Amazon CloudWatch to monitor log files. Pattern filtering can be used to analyze the logs and trigger Amazon CloudWatch alarms based on customer specified thresholds. Which types of log files can be sent to Amazon CloudWatch Logs?

    1. Operating system logs
    2. AWS CloudTrail Logs
    3. Amazon VPC Flow Logs
    4. All of the above
  14. AWS CloudTrail logs the API requests to AWS resources within your account. Which other AWS service can be used in conjunction with CloudTrail to capture information about changes made to AWS resources in your AWS account?

    1. Auto Scaling
    2. AWS Config
    3. Amazon VPC Flow Logs
    4. AWS Artifact
  15. True or False: Amazon Inspector continuously monitors your AWS account’s configuration against the Well-Architected Framework’s best practice recommendations for security.

    1. True
    2. False
  16. A workload consisting of Amazon EC2 instances is placed in an Amazon VPC. What feature of VPC can be used to deny network traffic based on IP source address and port number?

    1. Subnets
    2. Security groups
    3. Route tables
    4. Network Access Control Lists
  17. You want to pass traffic securely from your on-premises network to resources in your Amazon VPC. Which type of gateway can be used on the VPC?

    1. Internet Gateway (IGW)
    2. Amazon Virtual Private Cloud endpoint
    3. Virtual Private Gateway
    4. Amazon Virtual Private Cloud peer
  18. To protect data at rest within Amazon DynamoDB, customers can use which of the following?

    1. Client-side encryption
    2. TLS connections
    3. Server-side encryption provided by the Amazon DynamoDB service
    4. Fine-grained access controls
  19. When an Amazon Relational Database Service database instance is run within an Amazon Virtual Private Cloud, which Amazon VPC security features can be used to protect the database instance?

    1. Security groups
    2. Network ACLs
    3. Private subnets
    4. All of the above
  20. Which of the following is correct?

    1. Amazon SQS and Amazon SNS encrypt data at rest.
    2. Amazon SQS and Amazon SNS do not encrypt data at rest.
    3. Amazon SQS encrypts data at rest and Amazon SNS does not encrypt data at rest.
    4. Amazon SQS does not encrypt data at rest and Amazon SNS encrypts data at rest.