This chapter covers the following topics:
• Cloud Computing and the Cloud Service Models
• Cloud Security Responsibility Models
• DevOps, Continuous Integration (CI), Continuous Delivery (CD), and DevSecOps
• Understanding the Different Cloud Security Threats
Everyone uses cloud computing today. Many organizations have moved numerous applications to the cloud, and their employees use services offered by many cloud providers, such as Google Cloud Platform, Amazon Web Services (AWS), Microsoft Azure, and others. In this chapter, you learn the different cloud computing service models and the security responsibilities of the cloud provider and consumer of each model. You also learn about DevOps, Continuous Integration (CI), Continuous Delivery (CD), and DevSecOps. At the end of the chapter, you gain an understanding of the different cloud security threats in today’s environment.
The “Do I Know This Already?” quiz allows you to assess whether you should read this entire chapter thoroughly or jump to the “Exam Preparation Tasks” section. If you are in doubt about your answers to these questions or your own assessment of your knowledge of the topics, read the entire chapter. Table 2-1 lists the major headings in this chapter and their corresponding “Do I Know This Already?” quiz questions. You can find the answers in Appendix A, “Answers to the ‘Do I Know This Already?’ Quizzes and Review Questions.”
Caution
The goal of self-assessment is to gauge your mastery of the topics in this chapter. If you do not know the answer to a question or are only partially sure of the answer, you should mark that question as wrong for purposes of the self-assessment. Giving yourself credit for an answer you incorrectly guess skews your self-assessment results and might provide you with a false sense of security.
1. Which of the following is a reason why organizations are moving to the cloud?
a. To transition from operational expenditure (OpEx) to capital expenditure (CapEx)
b. To transition from capital expenditure (CapEx) to operational expenditure (OpEx)
c. Because of the many incompatibility issues in security technologies
d. None of these answers are correct.
2. Which of the following is a type of cloud model composed of two or more clouds or cloud services (including on-premises services or private clouds and public clouds)?
a. IaaS
b. Hybrid cloud
c. Community cloud
d. None of these answers are correct.
3. Which of the following is the cloud service model of Cisco WebEx and Office 365?
a. SaaS
b. PaaS
c. Serverless computing
d. IaaS
4. Which of the following development methodologies uses Scrum?
a. Agile
b. Waterfall
c. Service Iteration
d. None of these answers are correct.
5. Which of the following development methodologies includes a feedback loop to prevent problems from happening again (enabling faster detection and recovery by seeing problems as they occur and maximizing opportunities to learn and improve), as well as continuous experimentation and learning?
a. Pipelines
b. Waterfall
c. DevOps
d. None of these answers are correct.
6. AWS Lambda is an example of “serverless” computing. Serverless does not mean that you do not need a server somewhere. Instead, it means that you will be using which of the following to host and develop your code?
a. Agile
b. Fuzzers
c. Eclipse
d. Cloud platforms
7. The cloud security shared responsibility depends on the type of cloud model (SaaS, PaaS, or IaaS). In which of the following cloud service models is the cloud consumer (customer) responsible for the security and patching of the applications, but not the underlying operating system, virtual machines, storage, and virtual networks?
a. PaaS
b. SaaS
c. IaaS
d. None of these answers are correct.
8. Insufficient due diligence is one of the biggest issues when moving to the cloud. Security professionals must verify that which of the following issues are in place and discussed with the cloud provider?
a. Encryption
b. Data classification
c. Incident response
d. All of these answers are correct.
9. Which of the following is an input validation attack that has been used by adversaries to steal user cookies that can be exploited to gain access as an authenticated user to a cloud-based service? Attackers also have used these vulnerabilities to redirect users to malicious sites.
a. DNS attacks
b. HTML injection
c. SQL injection
d. XSS
10. Which of the following is a type of attack where the attacker could attempt to compromise the cloud by placing a malicious virtual machine in close proximity to a target cloud server?
a. Side-channel
b. Session riding
c. CSRF
d. Man-in-the-browser attack
Everyone is using the cloud or deploying hybrid solutions to host their applications. The reason is that many organizations are looking to transition from capital expenditure (CapEx) to operational expenditure (OpEx). The majority of today’s enterprises operate in a multicloud environment. It is obvious that cloud computing security is more important than ever.
Note
Cloud computing security includes many of the same functionalities as traditional IT security. This includes protecting critical information from theft, data exfiltration, and deletion, as well as privacy.
The advantages of using a cloud-based service include the following:
• Distributed storage
• Scalability
• Resource pooling
• Access from any location
• Measured service
• Automated management
The National Institute of Standards and Technology (NIST) authored Special Publication (SP) 800-145, “The NIST Definition of Cloud Computing,” to provide a standard set of definitions for the different aspects of cloud computing. The SP 800-145 document also compares the different cloud services and deployment strategies.
According to NIST, the essential characteristics of cloud computing include the following:
• On-demand self-service
• Broad network access
• Resource pooling
• Rapid elasticity
• Measured service
Cloud deployment models include the following:
• Public cloud: Open for public use. Examples include Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), and Digital Ocean.
• Private cloud: Used exclusively by the client organization, either on the premises (on-prem) or in a dedicated area of a cloud provider’s facility.
• Community cloud: Shared between multiple organizations.
• Hybrid cloud: Composed of two or more clouds or cloud services (including on-prem services).
Cloud computing can be broken into the following three basic models:
• Infrastructure as a Service (IaaS): IaaS describes a cloud solution where you rent infrastructure. You purchase virtual power to execute your software as needed. This is much like running a virtual server on your own equipment, except you are now running a virtual server on a virtual disk. This model is similar to a utility company model because you pay for what you use. Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), and Digital Ocean all provide IaaS solutions.
• Platform as a Service (PaaS): PaaS provides everything except applications. Services provided by this model include all phases of the system development life cycle (SDLC) and can use application programming interfaces (APIs), website portals, or gateway software. These solutions tend to be proprietary, which can cause problems if the customer moves away from the provider’s platform.
• Software as a Service (SaaS): SaaS is designed to provide a complete packaged solution. The software is rented out to the user. The service is usually provided through some type of front end or web portal. While the end user is free to use the service from anywhere, the company pays a per-use fee. Examples of SaaS offerings include Cisco WebEx, Office 365, and Google G Suite.
Note
NIST Special Publication 500-292, “NIST Cloud Computing Reference Architecture,” is another resource for learning more about cloud architecture.
Cloud service providers (CSPs) such as Azure, AWS, and GCP have no choice but to take their security and compliance responsibilities very seriously. For instance, Amazon created a Shared Responsibility Model that describes in detail the respective responsibilities of AWS customers and of Amazon itself. The Amazon Shared Responsibility Model can be accessed at https://aws.amazon.com/compliance/shared-responsibility-model.
The shared responsibility depends on the type of cloud model (SaaS, PaaS, or IaaS). Figure 2-1 shows the responsibilities of a CSP and its customers in a SaaS environment.
Figure 2-2 shows the responsibilities of a CSP and its customers in a PaaS environment.
Figure 2-3 shows the responsibilities of a CSP and its customers in an IaaS environment.
Regardless of the model used, cloud security is the responsibility of both the client and the cloud provider. These details will need to be worked out before a cloud computing contract is signed. The contracts will vary depending on the given security requirements of the client. Considerations include disaster recovery, service-level agreements (SLAs), data integrity, and encryption. For example, is encryption provided end to end or just at the cloud provider? Also, who manages the encryption keys—the cloud provider or the client? Overall, you want to ensure that the cloud provider has the same layers of security (logical, physical, and administrative) in place that you would have for services you control.
Patch management in the cloud is also a shared responsibility in IaaS and PaaS environments, but not in a SaaS environment. For example, in a SaaS environment, the CSP is responsible for patching all software and hardware vulnerabilities. However, in an IaaS environment, the CSP is responsible only for patching the hypervisors, physical compute and storage servers, and the physical network. You are responsible for patching the applications, operating systems (VMs), and any virtual networks you deploy.
When performing penetration testing in the cloud, you must first understand what you can and cannot do. Most CSPs have detailed guidelines on how to perform security assessments and penetration testing in their environments.
DevOps draws on many technical, project management, and organizational movements. Before DevOps, there were a few well-established development methodologies. One of the original development methodologies is called the waterfall model. The waterfall model is a software and hardware development and project management methodology with five to seven phases that follow in strict linear order, where each phase cannot start until the previous phase has been completed.
Figure 2-4 illustrates the typical phases of the waterfall development methodology.
One of the main reasons that organizations have used the waterfall model is that project requirements are agreed upon from the beginning; subsequently, planning and scheduling are simple and clear. With a fully laid-out project schedule, an accurate estimate can be given, including development project cost, resources, and deadlines. Another reason is that measuring progress is easy as you move through the phases and hit the different milestones. Your end customer is not perpetually adding new requirements to the project, thus delaying production.
There are also several disadvantages to the waterfall methodology. One is that it can be difficult for customers to enumerate and communicate all of their needs at the beginning of the project. If your end customer is dissatisfied with the product in the verification phase, going back and redesigning the code can be very costly. In addition, the waterfall methodology’s linear project plan is rigid and lacks flexibility for adapting to unexpected events.
Agile is a software development and project management process where a project is managed by breaking it up into several stages and involving constant collaboration with stakeholders and continuous improvement and iteration at every stage. The Agile methodology begins with end customers describing how the final product will be used and clearly articulating what problem it will solve. Once the coding begins, the respective teams cycle through a process of planning, executing, and evaluating. This process may allow the final deliverable to change to better fit the customer’s needs. In an Agile environment, continuous collaboration is key. Clear and ongoing communication among team members and project stakeholders allows for fully informed decisions to be made.
Note
The Agile methodology was originally developed by 17 people in 2001 in written form, and it is documented at “The Manifesto for Agile Software Development” (https://agilemanifesto.org).
In Agile, the input to the development process is the creation of a business objective, concept, idea, or hypothesis. Then the work is added to a committed “backlog.” From there, software development teams that follow the standard Agile or iterative process will transform that idea into “user stories” and some sort of feature specification. This specification is then implemented in code. The code is then checked in to a version control repository (for example, GitLab or GitHub), where each change is integrated and tested with the rest of the software system.
In Agile, value is created only when services are running in production; subsequently, you must ensure that you are not only delivering fast flow, but that your deployments can also be performed without causing chaos and disruptions, such as service outages, service impairments, or security or compliance failures.
A concept adopted by many organizations related to Agile is called Scrum. Scrum is a framework that helps organizations work together because it encourages teams to learn through experiences, self-organize while working on a solution, and reflect on their wins and losses to continuously improve. Scrum is used by software development teams; however, its principles and lessons can be applied to all kinds of teamwork. Scrum describes a set of meetings, tools, and roles that work in concert to help teams structure and manage their work.
The Scrum framework uses the concept of “sprints” (a short, time-boxed period when a Scrum team works to complete a predefined amount of work). Sprints are one of the key concepts of the Scrum and Agile methodologies.
Tip
The following video provides a good overview of the Agile methodology: www.youtube.com/watch?v=Z9QbYZh1YXY. The following GitHub repository includes a detailed list of resources related to the Agile methodology: https://github.com/lorabv/awesome-agile.
Agile also uses the Kanban process. Kanban is a scheduling system for Lean development and just-in-time (JIT) manufacturing originally developed by Taiichi Ohno from Toyota.
DevOps is the outcome of many trusted principles—from software development, manufacturing, and leadership to the information technology value stream. DevOps relies on bodies of knowledge from Lean, Theory of Constraints, resilience engineering, learning organizations, safety culture, human factors, and many others. Today’s technology DevOps value stream includes the following areas:
• Product management
• Software (or hardware) development
• Quality assurance (QA)
• IT operations
• Infosec and cybersecurity practices
There are three general ways (or methods) to DevOps:
• The first way includes systems and flow. In this way (or method), you make work visible by reducing the work “batch” sizes, reducing intervals of work, and preventing defects from being introduced by building in quality and control.
• The second way includes a feedback loop to prevent problems from happening again (enabling faster detection and recovery by seeing problems as they occur and maximizing opportunities to learn and improve).
• The third way is continuous experimentation and learning. In a true DevOps environment, you conduct dynamic, disciplined experimentation and take risks. You also allocate time to fix issues and make systems better. The creation of shared code repositories helps tremendously in achieving this continuous experimentation and learning process.
Continuous Integration (CI) is a software development practice where programmers merge code changes in a central repository multiple times a day. Continuous Delivery (CD) sits on top of CI and provides a way for automating the entire software release process. When you adopt CI/CD methodologies, each change in code should trigger an automated build-and-test sequence. This automation should also provide feedback to the programmers who made the change.
Note
CI/CD has been adopted by many organizations that provide cloud services (that is, SaaS, PaaS, and so on). For instance, CD can include cloud infrastructure provisioning and deployment, which traditionally have been done manually and consist of multiple stages. The main goal of the CI/CD processes is to be fully automated, with each run fully logged and visible to the entire team.
With CI/CD, most software releases go through the set of stages illustrated in Figure 2-5. A failure at any stage typically triggers a notification. For example, you can use Cisco WebEx Teams or Slack to let the responsible developers know about the cause of a given failure or to send notifications to the whole team after each successful deployment to production.
In Figure 2-5, the pipeline run is triggered by a source code repository (Git in this example). The code change typically sends a notification to a CI/CD tool, which runs the corresponding pipeline. Other pipeline triggers include automatically scheduled or user-initiated workflows, as well as the results of other pipelines.
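The stage-by-stage flow with failure notifications can be sketched in plain Python. This is a hypothetical illustration only; real pipelines are defined in your CI/CD tool’s own configuration format, and the `notify` function stands in for a WebEx Teams or Slack webhook call:

```python
# Minimal sketch of a CI/CD pipeline runner (hypothetical illustration).

def build():
    # Compile the code or build a container image.
    return True

def test():
    # Run the automated test suite.
    return True

def deploy():
    # Push the approved artifact to staging or production.
    return True

def notify(message):
    # Stand-in for a WebEx Teams or Slack notification webhook.
    print(message)

def run_pipeline():
    """Run each stage in order; a failure stops the run and notifies the team."""
    for name, stage in [("Build", build), ("Test", test), ("Deploy", deploy)]:
        if not stage():
            notify(f"{name} stage failed -- pipeline stopped")
            return False
    notify("Deployed to production")
    return True

run_pipeline()
```

The key design point mirrors the text: every stage gates the next one, and any failure produces an immediate notification rather than a silent, partially deployed release.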
Note
The Build stage includes the compilation of programs written in languages such as Java, C/C++, and Go. In contrast, Ruby, Python, and JavaScript programs work without this step; however, they could be deployed using Docker and other container technologies. Regardless of the language, cloud-native software is typically deployed with containers (in a microservice environment).
In the Test stage, automated tests are run to validate the code and the application behavior. The Test stage is important because it acts as a safety net to prevent easily reproducible bugs from being introduced. This concept can also be applied to preventing security vulnerabilities because, at the end of the day, a security vulnerability is typically a software (or hardware) bug. The responsibility of writing test scripts can fall to a developer or a dedicated QA engineer; however, it is best done while new code is being written.
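As a simple illustration of such a safety net, a unit test written with Python’s built-in unittest module pins down expected behavior so that a regression is caught automatically on the next pipeline run. The `apply_discount` function here is a hypothetical piece of business logic invented for the example:

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical business logic under test."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_normal_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_invalid_percent_rejected(self):
        # Guards against a reintroduced input-validation bug --
        # the same class of flaw behind many security vulnerabilities.
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main(exit=False)
```

Note that the second test enforces input validation; if a future change removes that check, the pipeline’s Test stage fails before the flaw ever reaches production.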
After you have built your code and it has passed all predefined tests, you are ready to deploy it (the Deploy stage). Traditionally, engineers have used multiple deployment environments (for example, a beta or staging environment used internally by the product team and a production environment).
Note
Organizations that have adopted the Agile methodology usually deploy work-in-progress manually to a staging environment for additional manual testing and review, and automatically deploy approved changes from the master branch to production.
Serverless does not mean that you do not need a server somewhere. Instead, it means that you will be using cloud platforms to host and/or develop your code. For example, you might have a serverless app that is distributed in a cloud provider such as AWS, Azure, or Google Cloud Platform.
Serverless is a cloud computing execution model where the cloud provider (AWS, Azure, Google Cloud, and so on) dynamically manages the allocation and provisioning of servers. Serverless applications run in stateless containers that are ephemeral and event-triggered (fully managed by the cloud provider).
AWS Lambda is one of the most popular serverless architectures in the industry.
Note
In AWS Lambda, you run code without provisioning or managing servers, and you pay only for the compute time you consume. When you upload your code, Lambda takes care of everything required to run and scale your application (offering high availability and redundancy).
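At its simplest, a Lambda function is just a handler that the platform invokes for each event. The `lambda_handler(event, context)` signature is the AWS convention for Python; the `name` payload field below is a hypothetical example, not part of any AWS API:

```python
import json

def lambda_handler(event, context):
    """Entry point that AWS Lambda invokes for each event.

    'event' carries the trigger payload (for example, from API Gateway),
    and 'context' provides runtime metadata. No server is provisioned or
    managed by you; Lambda runs this handler on demand.
    """
    name = event.get("name", "world")  # hypothetical payload field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

You would upload this code, attach a trigger (such as an HTTP endpoint or a storage event), and pay only for the invocations that actually run.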
As demonstrated in Figure 2-6, computing has evolved from traditional physical (bare-metal) servers to virtual machines (VMs), containers, and serverless architectures.
Before you can even think of building a distributed system, you must first understand how the container images that contain your applications make up all the underlying pieces of such a distributed system. Applications are normally composed of a language runtime, libraries, and source code. For instance, your application may use third-party or open-source shared libraries such as libc and OpenSSL. These shared libraries are typically shipped as shared components in the operating system that you installed on a system. The dependency on these libraries introduces difficulties when an application developed on your desktop, laptop, or any other development machine (dev system) has a dependency on a shared library that isn’t available when the program is deployed out to the production system. Even when the dev and production systems share the exact same version of the operating system, bugs can occur when programmers forget to include dependent asset files inside a package that they deploy to production.
The good news is that you can package applications in a way that makes it easy to share them with others. This is an example where containers become very useful. Docker, one of the most popular container runtime engines, makes it easy to package an executable and push it to a remote registry where it can later be pulled by others.
Note
Container registries are available in all of the major public cloud providers (for example, AWS, Google Cloud Platform, and Microsoft Azure) as well as services to build images. You can also run your own registry using open-source or commercial systems. These registries make it easy for developers to manage and deploy private images, while image-builder services provide easy integration with continuous delivery systems.
Container images bundle a program and its dependencies into a single artifact under a root file system. Containers are made up of a series of file system layers. Each layer adds, removes, or modifies files from the preceding layer in the file system. The overlay system is used both when packaging the image and when the image is actually being used. During runtime, there are a variety of different concrete implementations of such file systems, including aufs, overlay, and overlay2.
Tip
The most popular container image format is the Docker image format, which has been standardized by the Open Container Initiative (OCI) to the OCI image format. Kubernetes supports both Docker and OCI images. Docker images also include additional metadata used by a container runtime to start a running application instance based on the contents of the container image.
Let’s look at an example of how container images work. Figure 2-7 shows three container images: A, B, and C. Container Image B is “forked” from Container Image A. Then, in Container Image B, Python version 3 is added. Furthermore, Container Image C is built on Container Image B, and the programmer adds OpenSSL and nginx to develop a web server and enable TLS.
Abstractly, each container image layer builds on the previous one. Each parent reference is a pointer. The example in Figure 2-7 includes a simple set of containers; in many environments, you will encounter a much larger directed acyclic graph.
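The parent-pointer layering in Figure 2-7 can be modeled with Python’s `collections.ChainMap`: each layer records only its own additions, and a lookup falls through to the parent layers, much like an overlay file system resolves file reads. The package names and versions below are illustrative only:

```python
from collections import ChainMap

# Each "layer" records only the files/packages it adds or modifies.
image_a = {"debian-base": "v10"}                      # Container Image A
image_b_layer = {"python3": "3.8"}                    # B = A + Python 3
image_c_layer = {"openssl": "1.1", "nginx": "1.18"}   # C = B + TLS web server

# ChainMap resolves lookups from the newest layer down to the base,
# so each image is just a pointer chain back to its parent.
image_b = ChainMap(image_b_layer, image_a)
image_c = image_b.new_child(image_c_layer)

print("python3" in image_c)      # True -- inherited from Image B
print("debian-base" in image_c)  # True -- inherited from Image A
print("nginx" in image_b)        # False -- C's additions are invisible to B
```

As in real container images, a child sees everything in its ancestry, but changes in a child layer never affect the parent, which is what lets many images safely share common base layers.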
Multiple technologies and solutions have been used to manage, deploy, and orchestrate containers in the industry. The following are the most popular:
• Kubernetes: One of the most popular container orchestration and management frameworks. Originally developed by Google, Kubernetes is a platform for creating, deploying, and managing distributed applications. You can download Kubernetes and access its documentation at https://kubernetes.io.
• Nomad: A container management and orchestration platform by HashiCorp. You can download and obtain detailed information about Nomad at www.nomadproject.io.
• Apache Mesos: A distributed Linux kernel that provides native support for launching containers with Docker and AppC images. You can download Apache Mesos and access its documentation at https://mesos.apache.org.
• Docker Swarm: A container cluster management and orchestration system integrated with the Docker Engine. You can access the Docker Swarm documentation at https://docs.docker.com/engine/swarm.
Tip
You can practice and deploy your first container by using Katacoda, which is an interactive system that allows you to learn many different technologies, including Docker, Kubernetes, Git, and Tensorflow. You can access Katacoda at www.katacoda.com. Katacoda provides numerous interactive scenarios. For instance, you can use the “Deploying your first container” scenario to learn (hands-on) Docker: www.katacoda.com/courses/docker/deploying-first-container.
You can access the Docker documentation at https://docs.docker.com. You can also complete a free and quick hands-on tutorial to learn more about Docker containers at www.katacoda.com/courses/container-runtimes/what-is-a-container-image.
Organizations face many potential threats when moving to a cloud model. For example, although your data is in the cloud, it must reside in a physical location somewhere. Your cloud provider should agree in writing to provide the level of security required for your customers.
The following are questions to ask a cloud provider before signing a contract for its services:
• Who has access? Access control is a key concern because insider attacks are a huge risk. Anyone who has been approved to access the cloud has the potential of mishandling or exposing data to unauthorized users, so you want to know who has access and how they were screened. You also want to monitor who has access to each cloud service. For example, an employee who was the sole “administrator” of a cloud service might leave your organization without anyone else knowing the password, or the service might be canceled because a bill went unpaid. This might seem like an immature way of handling a production service, but it still happens in today’s environments.
• What are your regulatory requirements? Organizations operating in the United States, Canada, or the European Union have many regulatory requirements that they must abide by (for example, ISO/IEC 27002, EU-U.S. Privacy Shield Framework, ITIL, FedRAMP, and COBIT). You must ensure that your cloud provider can meet these requirements and is willing to undergo certification, accreditation, and review.
Note
Federal Risk and Authorization Management Program (FedRAMP) is a United States government program and certification that provides a standardized approach to security assessment, authorization, and continuous monitoring for cloud products and services. FedRAMP is mandatory for United States Federal Agency cloud deployments and service models at the low-, moderate-, and high-risk impact levels. Cloud offerings such as Cisco WebEx, Duo Security, Cloudlock, and others are FedRAMP certified. Additional information about FedRAMP can be obtained from www.fedramp.gov and www.cisco.com/c/en/us/solutions/industries/government/federal-government-solutions/fedramp.html.
• Do you have the right to audit? This particular item is no small matter in that the cloud provider should agree in writing to the terms of the audit. With cloud computing, maintaining compliance could become more difficult to achieve and even harder to demonstrate to auditors and assessors. Of the many regulations touching on information technology, few were written with cloud computing in mind. Auditors and assessors might not be familiar with cloud computing generally or with a given cloud service in particular.
Note
Division of compliance responsibilities between cloud provider and cloud customer must be determined before any contracts are signed or service is started.
• What type of training does the provider offer its employees? This is a rather important item to consider because people will always be the weakest link in security. Knowing how your provider trains its employees is an important item to review.
• What type of data classification system does the provider use? Questions you should be concerned with here include which data classification standard is being used and whether the provider uses data classification at all.
• How is your data separated from other users’ data? Is the data on a shared server or a dedicated system? A dedicated server means that your information is the only thing on the server. With a shared server, the amount of disk space, processing power, bandwidth, and so on is limited because others are sharing this device. If the server is shared, the data could potentially become comingled in some way.
• Is encryption being used? Encryption should be discussed. Is it being used while the data is at rest, in transit, or both? You will also want to know what type of encryption is being used; for example, there are big technical differences between DES and AES. For any algorithm, however, the basic questions are the same: Who maintains control of the encryption keys? Is the data encrypted at rest in the cloud? Is it encrypted in transit as well? Additionally, are you performing end-to-end encryption, or does the encryption stop somewhere between the user and the application (perhaps at some mid-layer in the cloud provider)?
• What are the service-level agreement (SLA) terms? The SLA serves as a contracted level of guaranteed service between the cloud provider and the customer that specifies what level of services will be provided.
• What is the long-term viability of the provider? How long has the cloud provider been in business, and what is its track record? If it goes out of business, what happens to your data? Will your data be returned and, if so, in what format?
• Will the provider assume liability in the case of a breach? If a security incident occurs, what support will you receive from the cloud provider? While many providers promote their services as being “unhackable,” cloud-based services are an attractive target to hackers.
• What is the disaster recovery/business continuity plan (DR/BCP)? Although you might not know the physical location of your services, it is physically located somewhere. All physical locations face threats such as fire, storms, natural disasters, and loss of power. In case of any of these events, how will the cloud provider respond, and what guarantee of continued services does it promise?
Finally, ask what happens to your information after your contract with the cloud service provider ends.
Note
Insufficient due diligence is one of the biggest issues when moving to the cloud. Security professionals must verify that issues such as encryption, compliance, incident response, and so forth are all worked out before a contract is signed.
Because cloud-based services are accessible via the Internet, they are open to any number of attacks. As more companies move to cloud computing, look for hackers to follow. Some of the potential attack vectors that criminals might attempt include the following:
• Denial of service (DoS): DoS and distributed denial-of-service (DDoS) attacks are still threats today. In Chapter 1, “Cybersecurity Fundamentals,” you learned how adversaries have used many techniques, including direct, reflected, and amplified DoS and DDoS attacks, to cause service disruption.
• Session hijacking: This type of attack occurs when an attacker can sniff and intercept traffic to take over a legitimate connection to a cloud service.
• DNS attacks: These include attacks against the DNS infrastructure itself, DNS cache poisoning attacks, and unauthorized DNS zone transfers.
• Cross-site scripting (XSS): Adversaries have used this input validation attack to steal user cookies that can be exploited to gain access as authenticated users to a cloud-based service. Attackers also have used these vulnerabilities to redirect users to malicious sites.
• Shared technology and multitenancy concerns: Cloud providers typically support a large number of tenants (their customers) by leveraging a common and shared underlying infrastructure. This requires a specific level of diligence with configuration management, patching, and auditing (especially with technologies such as virtual machine hypervisors, container management, and orchestration).
• Hypervisor attacks: If the hypervisor is compromised, all the virtual machines it hosts could potentially be compromised as well. Because many tenants share the same underlying infrastructure, a successful hypervisor attack likely affects multiple cloud consumers (tenants).
• Virtual machine (VM) attacks: Virtual machines are susceptible to many of the same traditional security attacks as a physical server. However, if a virtual machine is vulnerable to a VM escape attack, attacks can cross virtual machine boundaries. A VM escape attack is a type of attack in which the attacker manipulates a guest-level VM to attack its underlying hypervisor, other VMs, and/or the physical host.
• Cross-site request forgery (CSRF): This is another category of web application vulnerability and related attacks that have also been used to steal cookies and for user redirection. CSRF, in particular, leverages the trust that the application has in the user. For instance, if an attacker can leverage this type of vulnerability to manipulate an administrator or a privileged user, this attack could be more severe than XSS.
• SQL injection: This type of attack exploits vulnerable cloud-based applications that allow attackers to pass SQL commands to a database for execution.
• Session riding: Many organizations use this term to describe a cross-site request forgery attack. Attackers use this technique to transmit unauthorized commands by riding an active session, using an email or malicious link to trick users while they are logged in to a cloud service.
• Distributed denial-of-service (DDoS) attacks: Some security professionals have argued that the cloud is more vulnerable to DDoS attacks because it is shared by many users and organizations, which also makes any DDoS attack much more damaging.
• Man-in-the-middle cryptographic attacks: This type of attack is carried out when attackers place themselves in the communications path between two parties. Anytime attackers can do this, they can potentially intercept and modify communications.
• Side-channel attacks: An attacker could attempt to compromise the cloud by placing a malicious virtual machine on the same physical host as a target cloud server and then launching a side-channel attack to infer sensitive information (such as cryptographic keys) from shared hardware resources.
• Authentication attacks (insufficient identity, credentials, and access management): Authentication is a weak point in hosted and virtual services and is frequently targeted. There are many ways to authenticate users, such as based on what a person knows, has, or is. The mechanisms used to secure the authentication process and the method of authentication used are frequent targets of attackers.
• API attacks: Often APIs are configured insecurely. An attacker can take advantage of API misconfigurations to modify, delete, or append data in applications or systems in cloud environments.
• Known exploits leveraging vulnerabilities against infrastructure components: As you already know, no software or hardware is immune to vulnerabilities. Attackers can leverage known vulnerabilities against virtualization environments, Kubernetes, containers, authentication methods, and so on.
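To make the SQL injection threat described above more concrete, the following minimal Python sketch uses the standard-library sqlite3 module with a hypothetical users table (the table, account, and function names are illustrative, not from any real cloud service). It contrasts an unsafe query built by string concatenation with a parameterized query, the standard mitigation:

```python
import sqlite3

# Hypothetical users table, for demonstration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_vulnerable(username, password):
    # UNSAFE: attacker-controlled input is concatenated into the SQL string.
    query = ("SELECT * FROM users WHERE username = '%s' AND password = '%s'"
             % (username, password))
    return conn.execute(query).fetchall()

def login_safe(username, password):
    # SAFE: parameterized query; the driver treats the input strictly as data.
    query = "SELECT * FROM users WHERE username = ? AND password = ?"
    return conn.execute(query, (username, password)).fetchall()

# The classic ' OR '1'='1 payload bypasses the password check in the
# vulnerable version because it rewrites the WHERE clause logic...
assert login_vulnerable("alice", "' OR '1'='1") != []
# ...but the parameterized version treats the payload as a literal
# (and incorrect) password, so no rows are returned.
assert login_safe("alice", "' OR '1'='1") == []
```

The same principle applies to cloud-hosted applications regardless of the database engine: never build queries from untrusted input; always use parameterized queries or an equivalent prepared-statement API.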
Tip
The Cloud Security Alliance has a working group tasked to define the top cloud security threats. Details are available at https://cloudsecurityalliance.org/research/working-groups/top-threats. “The Cloud Security Alliance Top Threats Deep Dive” white paper is posted in the following GitHub repository: https://github.com/The-Art-of-Hacking/h4cker/blob/master/SCOR/top-threats-to-cloud-computing-deep-dive.pdf
Additional best practices and cloud security research articles can be found at the following site: https://cloudsecurityalliance.org/research/artifacts/.
Review the most important topics in this chapter, noted with the Key Topic icon in the outer margin of the page. Table 2-2 lists these key topics and the page numbers on which each is found.
Define the following key terms from this chapter and check your answers in the glossary:
Infrastructure as a Service (IaaS)
The answers to these questions appear in Appendix A, “Answers to the ‘Do I Know This Already?’ Quizzes and Review Questions.” For more practice with exam format questions, use the exam engine on the website.
1. A PaaS cloud typically provides what infrastructure?
2. What is the disadvantage of the waterfall development methodology?
3. What is an element of the Scrum framework?
4. What are examples of the DevOps value stream?
5. What is a software development practice where programmers merge code changes in a central repository multiple times a day?
6. What is a technology that bundles a program and its dependencies into a single artifact under a root file system? These items are made up of a series of file system layers. Each layer adds, removes, or modifies files from the preceding layer in the file system.
7. List container management and orchestration platforms.
8. What is a type of cloud deployment model where the cloud environment is shared among different organizations?
9. What is a United States government program and certification that provides a standardized approach to security assessment, authorization, and continuous monitoring for cloud products and services?
10. What are examples of cloud security threats?