2
Cloud Infrastructure Security

Mohammad GhasemiGol

Department of Computer Engineering, University of Birjand, Birjand, Iran

2.1 Introduction

Cloud infrastructure consists of servers, storage, network, management and deployment software, and platform virtualization. Therefore, cloud infrastructure security is the most important part of cloud security, and any attack on the cloud infrastructure can cause major service disruption. On the other hand, virtualization is an important underlying technology in cloud infrastructures that provides dynamic resource allocation and service provisioning, especially in Infrastructure-as-a-Service (IaaS). With this technology, multiple operating systems (OSs) can co-reside on the same physical machine without interfering with each other (Xiao and Xiao 2013). However, virtualization is also the source of significant security concerns in cloud infrastructure. Because multiple VMs run on the same server, and because the virtualization layer plays a considerable role in the operation of a VM, a malicious party has the opportunity to attack the virtualization layer. A successful attack would give the malicious party control over the all-powerful virtualization layer, potentially compromising the confidentiality and integrity of the software and data of any VM (Keller et al. 2010).

Although infrastructure security is more relevant to customers of IaaS, similar consideration should be given to providers' Platform‐as‐a‐Service (PaaS) and Software‐as‐a‐Service (SaaS) environments, since they have ramifications for customers' threat, risk, and compliance management. When discussing public clouds, the scope of infrastructure security is limited to the layers of infrastructure that move beyond the organization's control and into the hands of service providers (i.e. when responsibility for a secure infrastructure is transferred to the cloud service provider [CSP], based on the Service Provider Interface [SPI] delivery model) (Mather et al. 2009).

This chapter discusses cloud security from an infrastructure perspective. The rest of the chapter is organized as follows: Section 2.2 discusses cloud infrastructure security at the network, host, and application levels, including the role of hypervisor security in the Cloud. We analyze infrastructure security in several existing cloud platforms in Section 2.3. Section 2.4 discusses some countermeasures for protecting cloud infrastructure against various threats and vulnerabilities. Finally, conclusions are drawn in Section 2.5.

2.2 Infrastructure Security in the Cloud

Cloud infrastructure consists of servers, storage, network, management and deployment software, and platform virtualization. Hence, infrastructure security can be assessed in different areas. In this section, we look at the network level, host level, and application level of infrastructure security and the issues surrounding each level with specific regard to cloud computing. At the network level, although there are definitely security challenges with cloud computing, none of them are caused specifically by cloud computing. All of the network‐level security challenges associated with cloud computing are exacerbated by cloud computing, not caused by it. Likewise, security issues at the host level, such as an increased need for host‐perimeter security (as opposed to organizational entity‐perimeter security) and secured virtualized environments, are exacerbated by cloud computing but not specifically caused by it. The same holds true at the application level. Certainly there is an increased need for secure software development life cycles due to the public‐facing nature of (public) cloud applications and the need to ensure that application programming interfaces (APIs) have been thoroughly tested for security, but those application‐level security requirements are again exacerbated by cloud computing, not caused by it.

Therefore, the issues of infrastructure security and cloud computing are about understanding which party provides which aspects of security (i.e. does the customer provide it, or does the CSP provide it?) – in other words, defining trust boundaries. With regard to infrastructure security, an undeniable conclusion is that trust boundaries between customers and CSPs have moved. When we see poll after poll of information executives (e.g. CIOs) and information security professionals (e.g. CISOs) indicating that security is their number‐one concern related to cloud computing, the primary cause for that concern is altered trust boundaries. To be more specific, the issue is not so much that the boundaries have moved, but more importantly that customers are unsure where the trust boundaries have moved. Many CSPs have not clearly articulated those trust boundaries (e.g. what security is provided by the CSP versus what security still needs to be provided by the customer), nor are the new trust boundaries reinforced in operational obligations such as service‐level agreements (SLAs).

Although CSPs have the primary responsibility for articulating the new trust boundaries, some of the current confusion is also the fault of information security personnel. Some information security professionals, either fearing something new or not fully understanding cloud computing, are engaging in fear, uncertainty, and doubt (FUD) with their business customers. Similar to confusion over moved trust boundaries is the fact that the established model of network tiers or zones no longer exists. That model has been replaced with domains, which are less precise and afford less protection than the old model. (Domain names are used in various networking contexts and application‐specific naming and addressing purposes based on the Domain Name System [DNS].) If we can no longer trust the network (organizational) perimeter to provide sufficient protection and are now reliant on host perimeter security, what is the trust model between hosts?

An analogy to this problem exists and was dealt with 20 years ago: Secure Telephone Unit (STU) IIIs used by the U.S. Department of Defense (DoD) and the intelligence community. In that model, each STU‐III unit (a host) was responsible for its own “perimeter security” (i.e. the device's electronic components were tamper resistant), and each device had a secure authentication mechanism (i.e. a dongle with an identity written on it, protected and verified by asymmetric encryption and Public Key Infrastructure [PKI]). Additionally, each device would negotiate a common level of authorization (classification level) based on an attribute included with the identity in the dongle.

Today, we have no such model in cloud computing. The STU‐III model simply is not viable for cloud computing, and there is no trusted computing platform for virtual machine (VM) environments. Therefore, host‐to‐host authentication and authorization is problematic in cloud computing, since much of it uses virtualization. Today the use of federated identity management is focused on trust, identity, and authentication of people. The identity management solutions of today do assist in managing host‐level access; however, no viable solution addresses the issue of host‐to‐host trust. This issue is exacerbated in cloud computing because of the sheer number of resources available. Conceptually similar to the trust‐boundary problem at the application level is ensuring that one customer's data is not inadvertently provided to another, unauthorized customer. Data has to be securely labeled to ensure that it remains separated among customers in a multitenancy environment. Today, data separation in cloud computing is logical, not physical, as was the case previously, and there are valid concerns about the adequacy of that logical separation (Mather et al. 2009).

2.2.1 Infrastructure Security: The Network Level

When looking at the network level of infrastructure security, it is important to distinguish between public clouds and private clouds, as we explained in Chapter 1. With private clouds, there are no new attacks, vulnerabilities, or changes in risk specific to this topology that information security personnel need to consider. Although your organization's IT architecture may change with the implementation of a private cloud, your current network topology probably will not change significantly. If you have a private extranet in place (e.g. for premium customers or strategic partners), for practical purposes you probably have the network topology for a private cloud in place already. The security considerations you have today apply to a private cloud infrastructure, too. And the security tools you have in place (or should have in place) are also necessary for a private cloud and operate in the same way.

However, if you choose to use public cloud services, changing security requirements will require changes to your network topology. You must address how your existing network topology interacts with your cloud provider's network topology. There are four significant risk factors in this use case:

  • Ensuring the confidentiality and integrity of your organization's data in transit to and from your public cloud provider
  • Ensuring proper access control (authentication, authorization, and auditing) to whatever resources you are using at your public cloud provider
  • Ensuring the availability of the Internet‐facing resources in a public cloud that are being used by your organization, or have been assigned to your organization by your public cloud provider
  • Replacing the established model of network zones and tiers with domains

We will discuss each of these risk factors in the sections that follow.

2.2.1.1 Network‐Level Mitigation

Given the factors discussed in the preceding sections, what can you do to mitigate these increased risk factors? First, note that network‐level risks exist regardless of what aspects of cloud computing services are being used (e.g. SaaS, PaaS, or IaaS). The primary determination of risk level therefore is not which services are being used, but rather whether your organization intends to use or is using a public, private, or hybrid cloud. Although some IaaS clouds offer virtual‐network zoning, they may not match an internal private cloud environment that performs stateful inspection and other network security measures.

If your organization is large enough to afford the resources of a private cloud, your risks will decrease, assuming you have a true private cloud that is internal to your network. In some cases, a private cloud located at a cloud provider's facility can help meet your security requirements, but this will depend on the provider's capabilities and maturity. You can reduce confidentiality risks by using encryption: specifically, by using validated implementations of cryptography for data in transit. Digital signatures make it much more difficult, if not impossible, for someone to tamper with your data undetected, which helps ensure data integrity. Availability problems at the network level are far more difficult to mitigate with cloud computing unless your organization is using a private cloud that is internal to your network topology. Even if your private cloud is a private (i.e. nonshared) external network at a cloud provider's facility, you will face increased risk at the network level. A public cloud faces even greater risk. But let's keep some perspective: greater than what?
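To make the integrity point concrete, here is a minimal sketch, using only Python's standard library, of how a sender can attach a message authentication code to data in transit so the receiver detects tampering (the shared key and payload are illustrative placeholders; a production system would negotiate keys securely, e.g. via TLS):

    import hmac
    import hashlib

    SHARED_KEY = b"replace-with-a-strong-secret"  # illustrative placeholder

    def sign(message: bytes) -> bytes:
        # Compute an HMAC-SHA256 tag over the message with the shared key.
        return hmac.new(SHARED_KEY, message, hashlib.sha256).digest()

    def verify(message: bytes, tag: bytes) -> bool:
        # compare_digest runs in constant time to resist timing attacks.
        return hmac.compare_digest(sign(message), tag)

    payload = b"customer record in transit"
    tag = sign(payload)
    assert verify(payload, tag)             # untampered data passes
    assert not verify(payload + b"X", tag)  # any modification is detected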

Even large enterprises with significant resources face considerable challenges at the network level of infrastructure security. Are the risks associated with cloud computing actually higher than the risks enterprises are facing today? Consider existing private and public extranets, and take into account partner connections when making such a comparison. For large enterprises without significant resources, or for small to medium‐size businesses (SMBs), is the risk of using public clouds (assuming that such enterprises lack the resources necessary for private clouds) really higher than the risks inherent in their current infrastructures? In many cases, the answer is probably no: there is not a higher level of risk.

2.2.2 Infrastructure Security: The Host Level

When reviewing host security and assessing risks, you should consider the context of cloud service delivery models (SaaS, PaaS, and IaaS) and deployment models (public, private, and hybrid). Although there are no known new threats to hosts that are specific to cloud computing, some virtualization security threats such as VM escape, system‐configuration drift, and insider threats by way of weak access control to the hypervisor carry into the public cloud computing environment. The dynamic nature (elasticity) of cloud computing can bring new operational challenges from a security‐management perspective. The operational model motivates rapid provisioning and fleeting instances of VMs. Managing vulnerabilities and patches is therefore much harder than just running a scan, because the rate of change is much greater than in a traditional data center.

In addition, the fact that the Cloud harnesses the power of thousands of compute nodes, combined with the homogeneity of the OS employed by hosts, means threats can be amplified quickly and easily. This is referred to as the velocity of attack factor in the Cloud. More importantly, you should understand the trust boundary and the responsibilities that fall on your shoulders to secure the host infrastructure you manage. And you should compare this with providers' responsibilities in securing the part of the host infrastructure the CSP manages.

2.2.2.1 SaaS and PaaS Host Security

In general, CSPs do not publicly share information related to their host platforms, host OSs, and processes in place to secure the hosts, since hackers can exploit that information when they try to intrude into the cloud service. Hence, in the context of SaaS (e.g. Salesforce.com, Workday.com) or PaaS (e.g. Google App Engine, Salesforce.com's Force.com) cloud services, host security is opaque to customers, and the responsibility of securing the hosts is relegated to the CSP. To get assurance from the CSP about the security hygiene of its hosts, you should ask the vendor to share information under a non‐disclosure agreement (NDA) or simply demand that the CSP share the information via a controls‐assessment framework such as SysTrust or International Organization for Standardization (ISO) 27002. From a controls‐assurance perspective, the CSP has to ensure that appropriate preventive and detective controls are in place and must do so via a third‐party assessment or ISO 27002 type assessment framework.

Since virtualization is a key enabling technology that improves host hardware utilization, among other benefits, it is common for CSPs to employ virtualization platforms, including Xen and VMware hypervisors, in their host computing platform architecture. You should understand how the provider is using virtualization technology and the provider's process for securing the virtualization layer. Both the PaaS and SaaS platforms abstract and hide the host OS from end users with a host abstraction layer. One key difference between PaaS and SaaS is the accessibility of the abstraction layer that hides the OS services the applications consume. In the case of SaaS, the abstraction layer is not visible to users and is available only to developers and the CSP's operations staff, whereas PaaS users are given indirect access to the host abstraction layer in the form of a PaaS API that in turn interacts with the host abstraction layer. In short, if you are a SaaS or PaaS customer, you are relying on the CSP to provide a secure host platform on which the SaaS or PaaS application is developed and deployed by the CSP and you, respectively.

In summary, host security responsibilities in SaaS and PaaS services are transferred to the CSP. The fact that you do not have to worry about protecting hosts from host‐based security threats is a major benefit from a security management and cost standpoint. However, as a customer, you still own the risk of managing information hosted in the cloud services. It's your responsibility to get the appropriate level of assurance regarding how the CSP manages host security hygiene.

2.2.2.2 IaaS Host Security

Unlike with PaaS and SaaS, IaaS customers are primarily responsible for securing the hosts provisioned in the Cloud. Given that almost all IaaS services available today employ virtualization at the host layer, host security in IaaS should be categorized as follows:

  • Virtualization software security – The software layer sits on top of bare metal and provides customers with the ability to create and destroy virtual instances. Virtualization at the host level can be accomplished using any of the virtualization models, including OS-level virtualization (Solaris containers, BSD [Berkeley Software Distribution] jails, Linux-VServer), paravirtualization (in which the guest OS is modified to cooperate with the hypervisor, as in paravirtualized configurations of Xen and VMware), or hardware-based virtualization (Xen, VMware, Microsoft Hyper-V). It is important to secure this layer of software that sits between the hardware and the virtual servers. In a public IaaS service, customers do not have access to this software layer; it is managed by the CSP.
  • Customer guest OS or virtual server security – The virtual instance of an OS that is provisioned on top of the virtualization layer and is visible to customers from the Internet: e.g. various flavors of Linux, Microsoft Windows, and Solaris. Customers have full access to their virtual servers.

2.2.3 Infrastructure Security: The Application Level

Application or software security should be a critical element of your security program. Most enterprises with information security programs have yet to institute an application security program to address this realm. Designing and implementing applications targeted for deployment on a cloud platform requires that existing application security programs reevaluate current practices and standards. The application security spectrum ranges from standalone single‐user applications to sophisticated multiuser e‐commerce applications used by millions of users. Web applications such as CMSs, wikis, portals, bulletin boards, and discussion forums are used by small and large organizations. Many organizations also develop and maintain custom‐built web applications for their businesses using various web frameworks (PHP, .NET, J2EE, Ruby on Rails, Python, etc.). According to SANS (Northcutt et al. 2008), until 2007, few criminals attacked vulnerable websites because other attack vectors were more likely to lead to an advantage in unauthorized economic or information access. Increasingly, however, advances in cross‐site scripting (XSS) and other attacks have demonstrated that criminals looking for financial gain can exploit vulnerabilities resulting from web programming errors as new ways to penetrate important organizations. Here, we will limit our discussion to web application security: web applications in the Cloud accessed by users with a standard Internet browser, such as Firefox, Internet Explorer, or Safari, from any computer connected to the Internet.

2.2.4 Hypervisor Security in the Cloud

Before discussing data and application security in the Cloud, we first need to focus on security and the role of the hypervisor, and then on the servers on which user services are based. A hypervisor, also called a virtual machine manager (VMM), is a hardware-virtualization technique that allows multiple OSs to run concurrently on a host computer. The hypervisor runs on top of a kernel program, which itself runs on the core physical machine acting as the physical server. The hypervisor presents the guest OSs with a virtual operating platform and manages their execution. Multiple instances of a variety of OSs may share the virtualized hardware resources. Hypervisors are very commonly installed on server hardware, with the function of running guest OSs that themselves act as servers. The security of the hypervisor therefore involves the security of the underlying kernel program, the underlying physical machine (the physical server), and the individual guest OSs and their anchoring VMs.

The key feature of the cloud computing model is the concept of virtualization. Virtualization gives the Cloud the near-instant scalability and versatility that make cloud computing so desirable a computing solution for companies and individuals. The core of virtualization in cloud computing is the ease of minting VMs on demand with the hypervisor. The hypervisor allocates resources to each VM it creates and also handles the deletion of VMs. Because the hypervisor instantiates and mediates every VM, it is a bidirectional conduit into and out of each one, and the compromise of either hypervisor or VM therefore endangers the other. However, most hypervisors are constructed in such a way that there is a separation between the environments of the sandboxes (the VMs) and the hypervisor. There is just one hypervisor, which services all virtual sandboxes, each running a guest OS. The hypervisor runs as part of the native monolithic OS, side-by-side with the device drivers, file system, and network stack, completely in kernel space. One of the biggest security concerns with a hypervisor is therefore the establishment of covert channels by an intruder. According to the Trusted Computer Security Evaluation Criteria (TCSEC), a covert channel is created by a sender process that modulates some condition (such as free space, availability of some service, or wait time to execute) that can be detected by a receiving process. If an intruder succeeds in establishing a covert channel, either by modifying file contents or through timing, information can leak from one VM instance to another (Violino 2010).
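As a toy illustration of the covert-channel idea, consider the following self-contained Python sketch. It runs a sender and a receiver as two threads in one process (not across real VMs): the sender modulates CPU contention in fixed time slots, and the receiver recovers bits by timing its own work. The slot length and bit pattern are arbitrary assumptions and the decoding is noisy; the point is only to show how modulating a shared condition leaks information:

    import threading
    import time

    SLOT = 0.2                     # seconds per transmitted bit (illustrative)
    secret_bits = [1, 0, 1, 1, 0]  # the "secret" the sender leaks

    def sender():
        # Modulate a shared condition: busy-loop to send 1, sleep to send 0.
        for bit in secret_bits:
            end = time.monotonic() + SLOT
            if bit:
                while time.monotonic() < end:
                    pass           # consume CPU for the whole slot
            else:
                time.sleep(SLOT)

    def receiver(counts):
        # Count how much work fits in each slot; contention slows it down.
        for _ in secret_bits:
            start, n = time.monotonic(), 0
            while time.monotonic() - start < SLOT:
                n += 1
            counts.append(n)

    counts = []
    t1 = threading.Thread(target=sender)
    t2 = threading.Thread(target=receiver, args=(counts,))
    t1.start(); t2.start(); t1.join(); t2.join()
    threshold = sum(counts) / len(counts)
    print([1 if c < threshold else 0 for c in counts])  # slow slots decode as 1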

Also, since the hypervisor is the controller of all VMs, it becomes the single point of failure in any cloud computing architecture. That is, if an intruder compromises a hypervisor, the intruder has control of all the VMs the hypervisor has allocated. This means the intruder can create or destroy VMs at will. For example, the intruder can perform a denial of service attack by bringing down the hypervisor, which then brings down all VMs running on top of the hypervisor.

The processes for securing virtual hosts differ greatly from processes used to secure their physical counterparts. Securing virtual entities like a hypervisor, virtual OSs, and corresponding VMs is more complex. To understand hypervisor security, let us first discuss the environment in which the hypervisor works. Recall that a hypervisor is part of a virtual computer system (VCS). In his 1973 thesis in the Division of Engineering and Applied Physics, Harvard University, Robert P. Goldberg defines a VCS as a hardware‐software duplicate of a real computer system in which a statistically dominant subset of the virtual processor's instructions execute directly on the host processor in native mode. He also gives two parts to this definition, the environment and implementation (Goldberg 1973):

  • Environment – The VCS must simulate a real computer system. Programs and OSs that run on the real system must run on the virtual system with identical effect. Since the simulated machine may run at a different speed than the real one, timing-dependent processor and I/O code may not perform exactly as intended.
  • Implementation – Most instructions being executed must be processed directly by the host CPU without recourse to instruction-by-instruction interpretation. This guarantees that the VM will run on the host with relative efficiency. It also compels the VM to be similar or identical to the host, and forbids tampering with the control store to add new order code.

In the environment of VMs, a hypervisor is needed to control all the sandboxes (VMs). Generally, in practice, the underlying architecture of the hypervisor determines if there is true separation between the sandboxes. Goldberg classifies two types of hypervisor (Goldberg 1973):

  • Type‐1 (or native, bare‐metal) hypervisors run directly on the host's hardware to control the hardware and to manage guest OSs. All guest OSs then run on a level above the hypervisor. This model represents the classic implementation of VM architectures. Modern hypervisors based on this model include Citrix XenServer, VMware ESX/ESXi, and Microsoft Hyper‐V. The most common commercial hypervisors are based on a monolithic architecture. The underlying hypervisor services all virtual sandboxes, each running a guest OS. The hypervisor runs as part of the native monolithic OS, side‐by‐side with the device drivers, file system and network stack, completely in kernel space.
  • Type‐2 (or hosted) hypervisors run just above a host OS kernel such as Linux, Windows, and others. With the hypervisor layer as a distinct second software level, guest OSs run at the third level above the hardware. The host OS has direct access to the server's hardware, such as host CPU, memory, and I/O devices, and is responsible for managing basic OS services. The hypervisor creates VM environments and coordinates calls to CPU, memory, disk, network, and other resources through the host OS. Modern hypervisors based on this model include KVM and VirtualBox. Guests on either type of hypervisor can typically be enumerated through a common management API, as the sketch after this list shows.
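Both hypervisor types are commonly administered through a uniform management interface. The following minimal sketch uses the libvirt Python bindings (assumptions: the libvirt-python package is installed and a local hypervisor is running; the connection URI varies by deployment) to list the guest VMs a hypervisor is servicing:

    import libvirt  # requires the libvirt-python package

    # 'qemu:///system' targets a local KVM/QEMU host; Xen hosts use 'xen:///system'.
    conn = libvirt.open("qemu:///system")
    try:
        for dom in conn.listAllDomains():
            state, _reason = dom.state()
            # ID() returns -1 for domains that are defined but not running.
            print(f"{dom.name()}: state={state}, id={dom.ID()}")
    finally:
        conn.close()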

The discussion so far highlights the central role of the hypervisor in the operations of VM systems and points to its central role in securing all VM systems. Before we look at what can be done to secure it, let us ask what security breaches can happen to the hypervisor. Some malicious software, such as rootkits, masquerades as a hypervisor during its self-installation phase.

Neil MacDonald, vice president, distinguished analyst, and a Gartner Fellow Emeritus at Gartner Research, based in Stamford, CT, published his observation about hypervisors and their vulnerabilities in his blog post “Yes, Hypervisors Are Vulnerable” (MacDonald 2011). His observations are summarized here (Kizza and Yang 2014):

  • The virtualization platform (hypervisor/VMM) is software written by human beings and will contain vulnerabilities. Microsoft, VMware, Citrix, and others all have had, and will have, vulnerabilities.
  • Some of these vulnerabilities will result in a breakdown in isolation that the virtualization platform was supposed to enforce.
  • Bad guys will target this layer with attacks. The benefits of compromising this layer are simply too great.
  • While there have been a few disclosed attacks, it is just a matter of time before a widespread publicly disclosed enterprise breach is tied back to a hypervisor vulnerability.

There have been a growing number of virtualization vulnerabilities. Published papers have so far shown that the security of hypervisors can be undermined. As far back as 2006, King and Chen demonstrated the use of a type of malware called a virtual‐machine based rootkit (VMBR), installing a VM monitor underneath an existing OS and hoisting the original OS into a VM (King and Chen 2006).

In their study, the authors demonstrated a malware program that started to act as its own hypervisor under Windows. The hypervisor layer, which plays the core role in the virtualization process, is very vulnerable to hacking because it is the weakest link in the data center, and attacks on hypervisors are therefore on the rise. Data from the IBM X-Force 2010 Mid-Year Trend and Risk Report (Young 2010) shows that every year since 2005, vulnerabilities in virtualization server products (the hypervisors) have overshadowed those in workstation products, an indication of hackers' interest in hypervisors. The report further shows that 35% of server virtualization vulnerabilities allow an attacker to "escape" from a guest VM to affect other VMs or the hypervisor.

Note that the hypervisor in a type-1 environment is granted CPU privilege to access all system I/O resources and memory, which makes it a security threat to the whole cloud infrastructure: a single vulnerability in the hypervisor could give a hacker access to the entire system, including all guest OSs. Because such malware runs below the entire OS, there is a growing threat of hackers using malware and rootkits to install themselves as a hypervisor below the OS, making them more difficult to detect.

In a type-2 hypervisor configuration, the microkernel architecture is designed specifically to guarantee a robust separation of application partitions. This architecture puts the complex virtualization program in user space, so every guest OS uses its own instantiation of the virtualization program. In this case, therefore, there is complete separation between the sandboxes (VMs), reducing the risks exhibited in type-1 hypervisors: an attack on a type-2 hypervisor can bring down only one virtual box, and cannot bring down the whole cloud infrastructure as a type-1 compromise can.

According to King and Chen, VM-based rootkits are, overall, hard to detect and remove because their state cannot be accessed by software running in the target system. Further, VMBRs support general-purpose malicious services by allowing such services to run in a separate OS that is protected from the target system (King and Chen 2006).

2.3 Infrastructure Security Analysis in Some Clouds

In this section, we analyze the infrastructure security in Force.com, Amazon AWS, Google App Engine, and Microsoft Azure.

2.3.1 Force.com

Force.com is targeted toward corporate application developers and independent software vendors. Unlike other PaaS offerings, it does not expose developers directly to its own infrastructure. Developers do not provision CPU time, disk, or instances of running OSs. Instead, Force.com provides a custom application platform centered around the relational database, one resembling an application server stack you might be familiar with from working with .NET, J2EE, or LAMP. Although it integrates with other technologies using open standards such as Simple Object Access Protocol (SOAP) and Representational State Transfer (REST), the programming languages and metadata representations used to build applications are proprietary to Force.com. This is unique among the PaaS products but not unreasonable when examined in depth. Force.com operates at a significantly higher level of abstraction than the other PaaS products, promising dramatically higher productivity to developers in return for their investment and trust in a single‐vendor solution.

The Force.com platform architecture includes a database, a workflow engine, and user interface design tools. The platform includes an Eclipse-based integrated development environment (IDE) and a proprietary programming language called Apex, which has Java-like syntax. The database is a relational database. It is not possible to run any Java or .NET programs on the Force.com platform; developers must use Apex to build applications. Force.com also includes a tool called Builder for building web applications quickly. Builder provides a user interface to create objects, fields within objects, and relationships between fields. Once a user creates these objects, Builder automatically creates a web interface with create, update, and delete operations. Using Builder allows developers to build simple to moderately complex applications in a reasonably short time without writing any significant amount of code. The platform also provides a rich reporting environment for plotting bar graphs and pie charts (Padhy et al. 2011).

Multitenancy is an abstract concept, an implementation detail of Force.com, but one with tangible benefits for developers. Customers access shared infrastructure, with metadata and data stored in the same logical database. The multitenant architecture of Force.com consists of the following features:

  • Shared infrastructure – Every customer (or tenant) of Force.com shares the same infrastructure. You are assigned a logical environment within the Force.com infrastructure. At first, some might be uncomfortable with the thought of handing their data to a third party where it is comingled with that of competitors. Salesforce's whitepaper on its multitenant technology includes the technical details of how it works and why your data is safe from loss or spontaneous appearance to unauthorized parties (http://developerforce.s3.amazonaws.com/whitepapers/WP_Force‐MT_101508_PRINT.pdf).
  • Single version – There is only one version of the Force.com platform in production. The same platform is used to deliver applications of all sizes and shapes, used by 1–100,000 users, running everything from dog‐grooming businesses to the Japanese national post office.
  • Continuous, zero‐cost improvements – When Force.com is upgraded to include new features or bug fixes, the upgrade is enabled in every customer's logical environment with zero to minimal effort required.

Salesforce.com addresses application security by combining the strengths of multitenancy with modern development and management processes to minimize security vulnerabilities and maximize performance and usability. To achieve high scalability and performance, the database behind Salesforce.com customer relationship management (CRM) products is a single instance shared by thousands of customers. The application ensures that users see only the data to which they have been assigned privileges, as the following steps (and the sketch after this list) illustrate:

  • Every record of the database contains the customer's orgID.
  • During login, the authenticated user is mapped to their org and access privileges according to the sharing model.
  • Every request to the database is formed by the application and is limited to the user's orgID and privileges.
  • Every row returned from the database is then validated against the orgID.
  • An error in the query process does not return any data to the client.
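Salesforce does not publish its query engine, but the tenant-isolation steps above can be sketched against a generic relational database. In this illustrative Python example (the table layout, column names, and org identifiers are assumptions), the application appends the orgID predicate to every query and re-validates every returned row:

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE records (org_id TEXT, owner TEXT, body TEXT)")
    db.executemany("INSERT INTO records VALUES (?, ?, ?)",
                   [("org1", "alice", "org1 data"), ("org2", "bob", "org2 data")])

    def fetch_records(authenticated_org_id):
        # The application, never the user, adds the org_id filter to the query.
        rows = db.execute("SELECT org_id, body FROM records WHERE org_id = ?",
                          (authenticated_org_id,)).fetchall()
        # Defense in depth: validate every returned row against the org_id.
        for org_id, _body in rows:
            if org_id != authenticated_org_id:
                raise RuntimeError("tenant-isolation violation; return no data")
        return [body for _org, body in rows]

    print(fetch_records("org1"))  # ['org1 data']; org2 rows are never visible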

The software development life cycle (SDLC) used by Salesforce.com incorporates security as a core consideration. Before a product can be considered “done,” it must meet security requirements as well as functional requirements. To ensure high‐quality code, security is part of each of the design, code, test, and release phases of the SDLC.

Salesforce can roll out new releases with confidence because it maintains a single version of its infrastructure and can achieve broad test coverage by leveraging tests, code, and configurations from the company's production environment. Customers help maintain and improve Force.com in a systematic, measurable way as a side effect of using it. This deep feedback loop between Force.com and its users is impractical to achieve with on‐premises software.

Salesforce is hosted from dedicated spaces in top-tier data-center facilities. Its data centers are low-profile and designed as anonymous buildings without any company signage. The exterior walls of the facilities are bullet resistant, and concrete bollards are positioned around the facility perimeter to provide further security protection. All facilities maintain multiple transit access routes and are in close proximity to local law enforcement and fire/emergency services. All data centers selected are at core Internet hubs with diverse, physically protected routes into the facility. In addition to securing the data center locations, it is critical that all facilities maintain robust critical infrastructure to support Salesforce.com through the following services:

  • Next‐generation uninterruptible power supply (UPS) systems (N + 1)
  • N + 1 cooling infrastructure
  • Fire‐detection and ‐suppression system
  • Multi‐gigabit IP transit for external customer service
  • Access to thousands of global Internet peering points
  • Private peering with key carriers
  • Diverse physically protected secure paths into facilities, for redundancy

All infrastructure is redundant and fault tolerant across components, including network, application servers, and database servers.

The Salesforce CRM suite of applications is powered entirely by Linux and Solaris systems, built with an automated process that ensures compliance with standardized build specifications, including removal of unnecessary processes, accounts, and protocols and use of non-root accounts to run services. Monitoring and validation of host security include:

  • File‐integrity monitoring for unexpected changes to the system configuration (see the sketch after this list)
  • Malicious software detection on application servers
  • Vulnerability detection and remediation, including internal and external scanning and patching
  • Forwarding of all host logs to a centralized log‐aggregation and event‐correlation server for review and alerting
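As an illustration of the first item, file-integrity monitoring can be sketched in a few lines of Python (the watched paths and baseline file are assumptions, and production environments rely on dedicated tooling rather than hand-rolled scripts):

    import hashlib
    import json
    from pathlib import Path

    CANDIDATES = [Path("/etc/passwd"), Path("/etc/hosts")]  # illustrative paths
    WATCHED = [p for p in CANDIDATES if p.exists()]
    BASELINE = Path("baseline.json")

    def digest(path):
        return hashlib.sha256(path.read_bytes()).hexdigest()

    def snapshot():
        # Record a trusted baseline of file hashes.
        BASELINE.write_text(json.dumps({str(p): digest(p) for p in WATCHED}))

    def check():
        # Compare current hashes against the baseline; report any drift.
        baseline = json.loads(BASELINE.read_text())
        return [p for p, h in baseline.items() if digest(Path(p)) != h]

    snapshot()
    print("changed files:", check())  # an empty list means no unexpected changes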

Access to Salesforce.com is via the public Internet, and connections are secured via Secure Sockets Layer (SSL) / Transport Layer Security (TLS). Salesforce.com contracts with multiple carriers to provide the connectivity and bandwidth to host business‐critical data.

The database in Salesforce.com is hardened according to industry and vendor guidelines and is accessible only by a limited number of Salesforce.com employees with DBA access. Customers do not have direct database or OS-level access to the Salesforce environment. Customer passwords for the Salesforce CRM are hashed via SHA-256 before being stored in the database. Customers can specify that certain field types use encryption: these custom fields are encrypted by the application before being saved to the database and can be configured to mask the display of their contents according to user access.
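Salesforce's implementation is proprietary, but application-level field encryption of the kind described can be sketched with the Python cryptography package (the field value, masking rule, and key handling here are simplifying assumptions; a real deployment would fetch keys from a key-management service):

    from cryptography.fernet import Fernet  # AES-based authenticated encryption

    key = Fernet.generate_key()  # assumption: in production, a managed key
    fernet = Fernet(key)

    def encrypt_field(plaintext):
        # Encrypt the sensitive field before it is written to the database.
        return fernet.encrypt(plaintext.encode())

    def mask_for_display(plaintext):
        # Mask all but the last four characters for low-privilege users.
        return "*" * (len(plaintext) - 4) + plaintext[-4:]

    stored = encrypt_field("123-45-6789")  # only ciphertext reaches the database
    print(mask_for_display(fernet.decrypt(stored).decode()))  # *******6789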

Customer data is mirrored, backed up locally, and also mirrored over an encrypted network (Advanced Encryption Standard [AES] 128) to a 100% full‐scale replica disaster‐recovery data center. Salesforce's information security management system follows ISO 27002 practices and is certified to the ISO 27001 standard. The Computer Security Incident Response Team (CSIRT) runs in parallel with site operations to provide monitoring and incident response. The CSIRT consists of senior‐level security analysts and manages a variety of tools and third‐party resources that include:

  • Intrusion Detection Systems (IDS)
  • Security Event Management (SEM)
  • Threat monitoring
  • Perimeter monitoring
  • External Certificate Authority

Briefly, Force.com is the proven cloud infrastructure that powers Salesforce CRM apps. It provides data encryption in transit using AES-128. It also includes a multitenant kernel, ISO 27001-certified security, proven reliability, real-time scalability, a real-time query optimizer, transparent system status, real-time upgrades, proven integration, real-time sandbox environments, and global data centers with built-in disaster recovery. Force.com's security policies, procedures, and technologies have been validated by the world's most security-conscious organizations, including some of the largest financial services firms and leading security technology organizations. Customers' data is protected with comprehensive physical security, data encryption, user authentication, and application security as well as the latest standard-setting security practices and certifications, including:

  • World‐class security specifications.
  • SAS 70 Type II, SOX, ISO 27001, and third‐party vulnerability and SysTrust certifications.
  • Secure point‐to‐point data replication for data backup: backup tapes for customer data never leave the facilities, and no tapes are ever in transport.

2.3.2 Amazon AWS

The AWS global infrastructure includes the facilities, network, hardware, and operational software (e.g. host OS, virtualization software) that support the provisioning and use of AWS resources. This global infrastructure is designed and managed according to security best practices as well as a variety of security compliance standards. AWS customers can be assured that they are building web architectures on top of some of the most secure computing infrastructure in the world.

AWS compliance enables customers to understand the robust controls in place at AWS to maintain security and data protection in the Cloud. As systems are built on top of AWS cloud infrastructure, compliance responsibilities are shared. By tying together governance‐focused, audit‐friendly service features with applicable compliance or audit standards, AWS compliance enablers build on traditional programs, helping customers to establish and operate in an AWS security control environment. The IT infrastructure that AWS provides to its customers is designed and managed in alignment with security best practices and a variety of IT security standards, including:

  • SOC 1/SSAE 16/ISAE 3402 (formerly SAS 70)
  • SOC 2
  • SOC 3
  • FISMA, DIACAP, and FedRAMP
  • DOD CSM Levels 1–5
  • PCI DSS Level 1
  • ISO 9001/ISO 27001
  • ITAR
  • FIPS 140‐2
  • MTCS Level 3

In addition, the flexibility and control that the AWS platform provides allows customers to deploy solutions that meet several industry‐specific standards, including:

  • Criminal Justice Information Services (CJIS)
  • Cloud Security Alliance (CSA)
  • Family Educational Rights and Privacy Act (FERPA)
  • Health Insurance Portability and Accountability Act (HIPAA)
  • Motion Picture Association of America (MPAA)

AWS's data centers are state of the art, utilizing innovative architectural and engineering approaches. The data centers are housed in nondescript facilities. Physical access is strictly controlled both at the perimeter and at building ingress points by professional security staff utilizing video surveillance, intrusion‐detection systems, and other electronic means. Authorized staff must pass two‐factor authentication a minimum of two times to access data center floors. All visitors and contractors are required to present identification and are signed in and continually escorted by authorized staff.

AWS only provides data center access and information to employees and contractors who have a legitimate business need for such privileges. When an employee no longer has a business need for these privileges, their access is immediately revoked, even if they continue to be an employee of Amazon or AWS. All physical access to data centers by AWS employees is logged and audited routinely.

Automatic fire-detection and -suppression equipment has been installed to reduce risk. The data center electrical power systems are designed to be fully redundant and maintainable without impact to operations, 24 hours a day, seven days a week. Climate control is required to maintain a constant operating temperature for servers and other hardware, which prevents overheating and reduces the possibility of service outages. Data centers are conditioned to maintain atmospheric conditions at optimal levels. Personnel and systems monitor and control temperature and humidity at appropriate levels. AWS monitors electrical, mechanical, and life-support systems and equipment so that any issues are immediately identified. Preventative maintenance is performed to maintain the continued operability of equipment. When a storage device has reached the end of its useful life, AWS procedures include a decommissioning process that is designed to prevent customer data from being exposed to unauthorized individuals. AWS uses the techniques detailed in DoD 5220.22-M ("National Industrial Security Program Operating Manual," http://www.dss.mil/documents/odaa/nispom2006‐5220.pdf) or NIST 800-88 ("Guidelines for Media Sanitization," https://ws680.nist.gov/publication/get_pdf.cfm?pub_id=50819) to destroy data as part of the decommissioning process. All decommissioned magnetic storage devices are degaussed and physically destroyed in accordance with industry-standard practices.

Amazon's infrastructure has a high level of availability and provides customers with the features to deploy a resilient IT architecture. AWS has designed its systems to tolerate system or hardware failures with minimal customer impact. Data center business continuity management at AWS is under the direction of the Amazon Infrastructure Group. Data centers are built in clusters in various global regions. All data centers are online and serving customers; no data center is “cold.” In case of failure, automated processes move customer data traffic away from the affected area. Core applications are deployed in an N + 1 configuration, so that in the event of a data center failure, there is sufficient capacity to enable traffic to be load‐balanced to the remaining sites.

AWS provides the flexibility to place instances and store data within multiple geographic regions as well as across multiple availability zones within each region. Each availability zone is designed as an independent failure zone. This means availability zones are physically separated within a typical metropolitan region and are located in lower‐risk flood plains (specific flood zone categorization varies by region). Availability zones are all redundantly connected to multiple tier‐1 transit providers. Distributing applications across multiple availability zones provides the ability to remain resilient in the face of most failure modes, including natural disasters or system failures. The Amazon Incident Management team employs industry‐standard diagnostic procedures to drive resolution during business‐impacting events. Staff operators provide 24x7x365 coverage to detect incidents and to manage their impact and resolution.

The AWS network has been architected to permit customers to select the level of security and resiliency appropriate for their workload. To enable customers to build geographically dispersed, fault‐tolerant web architectures with cloud resources, AWS has implemented a world‐class network infrastructure that is carefully monitored and managed. Network devices, including firewall and other boundary devices, are in place to monitor and control communications at the external boundary of the network and at key internal boundaries within the network. These boundary devices employ rule sets, access control lists (ACLs), and configurations to enforce the flow of information to specific information system services.

ACLs, or traffic‐flow policies, are established on each managed interface, and manage and enforce the flow of traffic. ACL policies are approved by Amazon Information Security. These policies are automatically pushed using AWS's ACL‐Manage tool, to help ensure these managed interfaces enforce the most up‐to‐date ACLs. AWS has strategically placed a limited number of access points to the Cloud to allow for more comprehensive monitoring of inbound and outbound communications and network traffic. These customer access points are called API endpoints, and they allow secure HTTP access (HTTPS), which allows customers to establish a secure communication session with storage or compute instances within AWS. To support customers with Federal Information Processing Standard (FIPS) cryptographic requirements, the SSL‐terminating load balancers in AWS GovCloud (U.S.) are FIPS 140‐2‐compliant. In addition, AWS has implemented network devices that are dedicated to managing interfacing communications with Internet service providers (ISPs). AWS employs a redundant connection to more than one communication service at each Internet‐facing edge of the AWS network. These connections each have dedicated network devices.

Customers can connect to an AWS access point via HTTP or HTTPS using Secure Sockets Layer (SSL), a cryptographic protocol that is designed to protect against eavesdropping, tampering, and message forgery.

For customers that require additional layers of network security, AWS offers the Amazon Virtual Private Cloud (VPC), which provides a private subnet within the AWS cloud, and the ability to use an IPsec virtual private network (VPN) device to provide an encrypted tunnel between the Amazon VPC and the customer's data center. For more information about VPC configuration options, refer to the Amazon Virtual Private Cloud (https://aws.amazon.com/vpc).
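As a concrete illustration, a VPC with a private subnet can be provisioned programmatically. This minimal boto3 sketch assumes AWS credentials are already configured; the region and CIDR blocks are illustrative choices, not requirements:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

    # Create an isolated address space for the VPC.
    vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
    vpc_id = vpc["Vpc"]["VpcId"]

    # Carve out a subnet; with no route to an Internet gateway attached,
    # instances launched here are not directly reachable from the Internet.
    subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")
    print("private subnet:", subnet["Subnet"]["SubnetId"])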

Logically, the AWS Production network is segregated from the Amazon Corporate network by means of a complex set of network security/segregation devices. AWS developers and administrators on the corporate network who need to access AWS cloud components in order to maintain them must explicitly request access through the AWS ticketing system. All requests are reviewed and approved by the applicable service owner.

Approved AWS personnel then connect to the AWS network through a bastion host that restricts access to network devices and other cloud components, logging all activity for security review. Access to bastion hosts requires SSH public-key authentication for all user accounts on the host. It should be noted that all communications between regions are across public Internet infrastructure; therefore, appropriate encryption methods must be used to protect sensitive data.

AWS utilizes a wide variety of automated monitoring systems to provide a high level of service performance and availability. AWS monitoring tools are designed to detect unusual or unauthorized activities and conditions at ingress and egress communication points. These tools monitor server and network usage, port‐scanning activities, application usage, and unauthorized intrusion attempts. The tools have the ability to set custom performance‐metric thresholds for unusual activity.

Systems within AWS are extensively instrumented to monitor key operational metrics. Alarms are configured to automatically notify operations and management personnel when early warning thresholds are crossed on key operational metrics. An on‐call schedule is used so personnel are always available to respond to operational issues. This includes a pager system so alarms are quickly and reliably communicated to operations personnel.

AWS security‐monitoring tools help identify several types of denial of service (DoS) attacks, including distributed, flooding, and software/logic attacks. When DoS attacks are identified, the AWS incident‐response process is initiated. In addition to the DoS prevention tools, redundant telecommunication providers at each region as well as additional capacity protect against the possibility of DoS attacks.

The AWS network provides significant protection against traditional network security issues, and customers can implement further protection. The following are a few examples:

  • Distributed denial of service (DDoS) attacks – AWS API endpoints are hosted on large, Internet‐scale, world‐class infrastructure that benefits from the same engineering expertise that built Amazon into the world's largest online retailer. Proprietary DDoS mitigation techniques are used. Additionally, AWS's networks are multihomed across a number of providers to achieve Internet access diversity.
  • Man‐in‐the‐middle (MITM) attacks – All of the AWS APIs are available via SSL‐protected endpoints that provide server authentication. Amazon EC2 Amazon Machine Images (AMIs) automatically generate new SSH host certificates on first boot and log them to the instance's console. Customers can then use the secure APIs to call the console and access the host certificates before logging in to the instance for the first time. Amazon encourages customers to use SSL for all interactions with AWS.
  • IP spoofing – Amazon EC2 instances cannot send spoofed network traffic. The AWS‐controlled, host‐based firewall infrastructure will not permit an instance to send traffic with a source IP or MAC address other than its own.
  • Port scanning – Unauthorized port scans by Amazon EC2 customers are a violation of the AWS Acceptable Use Policy. Violations of the AWS Acceptable Use Policy are taken seriously, and every reported violation is investigated. When unauthorized port scanning is detected by AWS, it is stopped and blocked. Port scans of Amazon EC2 instances are generally ineffective because, by default, all inbound ports on Amazon EC2 instances are closed and are only opened by customers. Strict management of security groups by customers can further mitigate the threat of port scans (see the sketch after this list). If customers configure the security group to allow traffic from any source to a specific port, then that specific port will be vulnerable to a port scan. In these cases, customers must use appropriate security measures to protect listening services that may be essential to their application from being discovered by an unauthorized port scan. For example, a web server must clearly have port 80 (HTTP) open to the world, and the administrator of this server is responsible for the security of the HTTP server software, such as Apache. Customers may request permission to conduct vulnerability scans as required to meet specific compliance requirements. These scans must be limited to customer instances and must not violate the AWS Acceptable Use Policy.
  • Packet sniffing by other tenants – It is not possible for a virtual instance running in promiscuous mode to receive or “sniff” traffic that is intended for a different virtual instance. While customers can place their interfaces into promiscuous mode, the hypervisor will not deliver any traffic to them that is not addressed to them. Even two virtual instances that are owned by the same customer located on the same physical host cannot listen to each other's traffic. Attacks such as Address Resolution Protocol (ARP) cache poisoning do not work within Amazon EC2 and Amazon VPC. Amazon EC2 does provide ample protection against one customer inadvertently or maliciously attempting to view another's data, but as a standard practice, customers should encrypt sensitive traffic.
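As an illustration of the strict security-group management recommended above, the following boto3 sketch opens a listening port only to a single trusted network rather than to the whole Internet (the group name, port, and CIDR range are assumptions, and an account without a default VPC would also need to pass VpcId):

    import boto3

    ec2 = boto3.client("ec2")

    # Create a security group; assumes a default VPC (otherwise pass VpcId=...).
    sg = ec2.create_security_group(
        GroupName="web-restricted",
        Description="HTTP from the corporate network only")

    # Allow inbound HTTP solely from one trusted CIDR; every other inbound
    # port stays closed by default, which blunts port scans.
    ec2.authorize_security_group_ingress(
        GroupId=sg["GroupId"],
        IpPermissions=[{
            "IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
            "IpRanges": [{"CidrIp": "203.0.113.0/24"}],  # documentation range
        }])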

In addition to monitoring, regular vulnerability scans are performed on the host OS, web application, and databases in the AWS environment using a variety of tools. Also, AWS Security teams subscribe to newsfeeds for applicable vendor flaws and proactively monitor vendors' websites and other relevant outlets for new patches.

AWS has established formal policies and procedures to delineate the minimum standards for logical access to AWS platform and infrastructure hosts. AWS conducts criminal background checks, as permitted by law, as part of pre‐employment screening practices for employees and commensurate with the employee's position and level of access. The policies also identify functional responsibilities for the administration of logical access and security.

AWS Security has established a credentials policy with required configurations and expiration intervals. Passwords must be complex and are forced to be changed every 90 days. AWS's development process follows secure software development best practices, which include formal design reviews by the AWS Security team, threat modeling, and completion of a risk assessment. Static code analysis tools are run as part of the standard build process, and all deployed software undergoes recurring penetration testing performed by carefully selected industry experts. Security risk assessment reviews begin during the design phase, and the engagement lasts through launch to ongoing operations.
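A credentials policy of this shape, complexity requirements plus a 90-day expiration, can be sketched as a simple compliance check (the specific complexity rules below are generic assumptions, not AWS's internal policy):

    import re
    from datetime import date, timedelta

    MAX_AGE = timedelta(days=90)  # forced rotation interval

    def password_compliant(password, last_changed):
        # Generic complexity rules: minimum length plus mixed character classes.
        complex_enough = (len(password) >= 12
                          and re.search(r"[a-z]", password)
                          and re.search(r"[A-Z]", password)
                          and re.search(r"\d", password)
                          and re.search(r"[^\w\s]", password))
        # Credentials older than the rotation interval fail regardless.
        not_expired = date.today() - last_changed <= MAX_AGE
        return bool(complex_enough) and not_expired

    print(password_compliant("Tr0ub4dor&Xyz", date.today() - timedelta(days=30)))   # True
    print(password_compliant("Tr0ub4dor&Xyz", date.today() - timedelta(days=120)))  # False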

Routine, emergency, and configuration changes to existing AWS infrastructure are authorized, logged, tested, approved, and documented in accordance with industry norms for similar systems. Updates to AWS's infrastructure are done to minimize any impact on customers and their use of the services. AWS will communicate with customers, either via email or through the AWS Service Health Dashboard, when service use is likely to be adversely affected.

AWS applies a systematic approach to managing change so that alterations to customer-impacting services are thoroughly reviewed, tested, approved, and well-communicated. The AWS change-management process is designed to avoid unintended service disruptions and to maintain the integrity of service to the customer. Changes are typically pushed into production in a phased deployment starting with lowest-impact areas. Deployments are tested on a single system and closely monitored so impacts can be evaluated. Service owners have a number of configurable metrics that measure the health of the service's upstream dependencies. These metrics are closely monitored, with thresholds and alarming in place.

Amazon's Corporate Applications team develops and manages software to automate IT processes for UNIX/Linux hosts in the areas of third-party software delivery, internally developed software, and configuration management. The Infrastructure team maintains and operates a UNIX/Linux configuration-management framework to address hardware scalability, availability, auditing, and security management. By centrally managing hosts through the use of automated processes that manage change, Amazon is able to achieve its goals of high availability, repeatability, scalability, security, and disaster recovery. Systems and network engineers monitor the status of these automated tools on a continuous basis, reviewing reports to respond to hosts that fail to obtain or update their configuration and software.

Internally developed configuration‐management software is installed when new hardware is provisioned. These tools are run on all UNIX hosts to validate that they are configured and that software is installed in compliance with standards determined by the role assigned to the host. This configuration‐management software also helps to regularly update packages that are already installed on the host. Only approved personnel enabled through the permissions service may log in to the central configuration‐management servers.
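The following minimal sketch illustrates the idea of role‐based configuration validation: compare the packages installed on a host against a baseline defined for the host's role and report deviations. The roles, package names, and versions are illustrative assumptions.

```python
# A minimal sketch of role-based configuration validation: compare the
# packages installed on a host against the baseline for its role.
# Package names and versions are illustrative assumptions.
ROLE_BASELINES = {
    "web": {"nginx": "1.14.0", "openssl": "1.1.0h"},
    "db":  {"postgresql": "10.4", "openssl": "1.1.0h"},
}

def compliance_report(role: str, installed: dict) -> list:
    """Return deviations of installed packages from the role baseline."""
    issues = []
    for pkg, wanted in ROLE_BASELINES[role].items():
        have = installed.get(pkg)
        if have is None:
            issues.append(f"{pkg}: missing (want {wanted})")
        elif have != wanted:
            issues.append(f"{pkg}: {have} (want {wanted})")
    return issues

print(compliance_report("web", {"nginx": "1.12.2", "openssl": "1.1.0h"}))
# ['nginx: 1.12.2 (want 1.14.0)']
```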

2.3.3 Google App Engine

Google security policies provide a series of threat‐prevention and infrastructure‐management procedures. Malware poses a significant risk to today's IT environments. An effective malware attack can lead to account compromise, data theft, and possibly additional access to a network. Google takes these threats to its networks and its customers very seriously and uses a variety of methods to address malware risks.

This strategy begins with manual and automated scanners that analyze Google's search index for websites that may be vehicles for malware or phishing. (More information about this process is available at https://webmasters.googleblog.com/2008/10/malware‐we‐dont‐need‐no‐stinking.html.) The blacklists produced by these scanning procedures have been incorporated into various web browsers and Google Toolbar to help protect Internet users from suspicious websites and sites that may have become compromised. These tools, available to the public, operate for Google employees as well. In addition, Google makes use of anti‐virus software and proprietary techniques in Gmail, on servers, and on workstations to address malware.

Google's security‐monitoring program analyzes information gathered from internal network traffic, employee actions on systems, and outside knowledge of vulnerabilities. At multiple points across Google's global network, internal traffic is inspected for suspicious behavior, such as the presence of traffic that might indicate botnet connections. This analysis is performed using a combination of open source and commercial tools for traffic capture and parsing, supported by a proprietary correlation system built on top of Google technology. Network analysis is supplemented by automated examination of system logs to identify unusual behavior, such as unexpected activity in former employees' accounts or attempted access of customer data. Google Security engineers place standing search alerts on public data repositories to look for security incidents that might affect the company's infrastructure. They review inbound security reports and monitor public mailing lists, blog posts, and web bulletin‐board systems. When automated analysis determines that an unknown threat may exist, the issue is escalated to Google Security staff.
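As a simplified illustration of one such check, the sketch below scans login events for activity by deactivated accounts, one of the unusual behaviors mentioned above. The log format and account names are assumptions for the example.

```python
# A minimal sketch of one check described above: flag log-in events by
# accounts that have been deactivated. The log format and the set of
# former employees are illustrative assumptions.
DEACTIVATED = {"jdoe", "asmith"}

log_lines = [
    "2018-06-01T10:02:11Z login user=mkhan src=10.1.2.3",
    "2018-06-01T10:07:42Z login user=jdoe src=10.9.8.7",
]

def suspicious_logins(lines):
    for line in lines:
        # key=value fields follow the timestamp and event name
        fields = dict(f.split("=", 1) for f in line.split()[2:])
        if fields.get("user") in DEACTIVATED:
            yield line

for alert in suspicious_logins(log_lines):
    print("ALERT:", alert)
# ALERT: 2018-06-01T10:07:42Z login user=jdoe src=10.9.8.7
```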

Google employs a dedicated team responsible for managing vulnerabilities in a timely manner. The Google Security Team scans for security threats using commercial and in‐house‐developed tools, automated and manual penetration efforts, quality assurance (QA) processes, software security reviews, and external audits. The vulnerability‐management team is responsible for tracking and following up on vulnerabilities. Once a legitimate vulnerability requiring remediation has been identified by the Security Team, it is logged, prioritized according to severity, and assigned an owner. The vulnerability‐management team tracks such issues and follows up until it can verify that the vulnerability has been remediated. Google also maintains relationships and interfaces with members of the security research community to track reported issues in Google services and open source tools. Under Google's Vulnerability Reward Program (http://www.google.com/about/company/rewardprogram.html), security researchers receive rewards for submitting valid reports of security vulnerabilities in Google services.
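A minimal sketch of that workflow, logging a vulnerability, prioritizing it by severity, assigning an owner, and tracking it until remediation is verified, might look like the following; the severity levels and data model are assumptions, not Google's actual tooling.

```python
# A minimal sketch of the described workflow: log a vulnerability,
# prioritize it by severity, assign an owner, and track it until
# remediation is verified. Severity levels are an assumption.
from dataclasses import dataclass, field
from typing import List

SEVERITY_ORDER = {"critical": 0, "high": 1, "medium": 2, "low": 3}

@dataclass
class Vulnerability:
    title: str
    severity: str
    owner: str

@dataclass
class Tracker:
    open_items: List[Vulnerability] = field(default_factory=list)

    def log(self, vuln: Vulnerability):
        """Record a new finding and keep the queue sorted by severity."""
        self.open_items.append(vuln)
        self.open_items.sort(key=lambda v: SEVERITY_ORDER[v.severity])

    def verify_remediated(self, title: str):
        """Close an item once remediation has been verified."""
        self.open_items = [v for v in self.open_items if v.title != title]

tracker = Tracker()
tracker.log(Vulnerability("XSS in search form", "high", "web-team"))
tracker.log(Vulnerability("Kernel privilege escalation", "critical", "infra-team"))
print(tracker.open_items[0].title)  # 'Kernel privilege escalation'
```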

Google has an incident‐management process for security events that may affect the confidentiality, integrity, or availability of its systems or data. This process specifies courses of action and procedures for notification, escalation, mitigation, and documentation. Staff are trained in forensics and handling evidence in preparation for an event, including the use of third‐party and proprietary tools. Testing of incident response plans is performed for identified areas, such as systems that store sensitive customer information. These tests take into consideration a variety of scenarios, including insider threats and software vulnerabilities.

The Google Security Team is available 24x7 to all employees. When an information security incident occurs, Google's Security staff respond by logging and prioritizing the incident according to its severity. Events that directly impact customers are treated with the highest priority. An individual or team is assigned to remediate the problem, enlisting the help of product and subject‐matter experts as appropriate. Google Security engineers conduct post‐mortem investigations when necessary to determine the root cause of single events and of trends spanning multiple events over time, and to develop new strategies to help prevent recurrence of similar incidents.

Google employs multiple layers of defense to help protect the network perimeter from external attacks. Only authorized services and protocols that meet Google's security requirements are permitted to traverse the company's network. Unauthorized packets are automatically dropped. Google's network security strategy is composed of the following elements:

  • Control of the size and makeup of the network perimeter, with network segregation enforced using industry‐standard firewall and ACL technology.
  • Management of network firewall and ACL rules using change management, peer review, and automated testing.
  • Restriction of access to networked devices to authorized personnel.
  • Routing of all external traffic through custom front‐end servers that help detect and stop malicious requests.
  • Creation of internal aggregation points to support better monitoring.
  • Examination of logs for exploitation of programming errors (e.g. XSS), with high‐priority alerts generated when an event is found (see the sketch after this list).
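As a sketch of the last element, the following Python fragment scans request logs for strings that suggest attempted XSS and raises a high‐priority alert; the patterns and log lines are illustrative assumptions.

```python
# A minimal sketch: scan web-server request logs for strings that
# suggest attempted XSS and raise a high-priority alert. The patterns
# and log lines are illustrative assumptions.
import re

XSS_PATTERNS = re.compile(r"<script|javascript:|onerror\s*=", re.IGNORECASE)

requests_log = [
    "GET /search?q=cloud+security HTTP/1.1",
    "GET /search?q=<script>alert(1)</script> HTTP/1.1",
]

for line in requests_log:
    if XSS_PATTERNS.search(line):
        print("HIGH PRIORITY ALERT: possible XSS attempt:", line)
```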

Google provides many services that use Hypertext Transfer Protocol Secure (HTTPS) for more secure browser connections. Services such as Gmail, Google Search, and Google+ support HTTPS by default for users who are signed into their Google accounts. Information sent via HTTPS is encrypted from the time it leaves Google until it is received by the recipient's computer.

Google's production servers are built on a proprietary design and run a version of Linux that has been customized to include only the components necessary to run Google applications, such as the services required to administer the system and serve user traffic. The system is designed so that Google can maintain control over the entire hardware and software stack and support a secure application environment. Because Google's production servers are built on a standard OS, security fixes can be uniformly deployed to the company's entire infrastructure. This homogeneous environment is maintained by proprietary software that continually monitors systems for binary modifications. If a modification that differs from the standard Google image is found, the system is automatically returned to its official state. These automated, self‐healing mechanisms are designed to enable Google to monitor and remediate destabilizing events, receive notifications about incidents, and slow down potential compromise on the network. By using a change‐management system to provide a centralized mechanism for registering, approving, and tracking changes that impact all systems, Google reduces the risks associated with unauthorized modifications to the standard Google OS.
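A minimal sketch of such binary‐integrity monitoring follows: hash the files on a host, compare them to a manifest describing the standard image, and flag anything that should be restored. The paths and hashes are illustrative assumptions, not Google's actual mechanism.

```python
# A minimal sketch of image-integrity monitoring: hash files on a host
# and compare them to a manifest of the standard image, reporting
# anything that should be restored. Paths and hashes are assumptions.
import hashlib
from pathlib import Path

STANDARD_IMAGE = {
    "/usr/bin/serve": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def sha256(path: str) -> str:
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def modified_binaries():
    for path, expected in STANDARD_IMAGE.items():
        if Path(path).exists() and sha256(path) != expected:
            yield path  # candidate for automatic restoration

for path in modified_binaries():
    print("restore from standard image:", path)
```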

Google App Engine provides DoS protection and SSL for all App Engine applications. Google does not disclose details of its hardware security features, but its data centers have successfully completed a SAS 70 Type II audit.

2.3.4 Microsoft Azure

Microsoft Azure can provide businesses with on‐demand infrastructure that can scale and adapt to changing business needs. Customers can quickly deploy new VMs in minutes, and with pay‐as‐you‐go billing, they aren't penalized when they need to reconfigure VMs. Microsoft Azure VMs even offer a gallery of preconfigured VM images to choose from so customers can get started as quickly as possible. Customers can also upload or download virtual disks, load‐balance VMs, and integrate VMs into their on‐premises environment using virtual networks.

Microsoft Azure is Microsoft's cloud computing platform and infrastructure for building, deploying, and managing applications and services through its global network of data centers.

Virtualization in Azure is based on the Hyper‐V hypervisor; supported OSs are Windows, several Linux distributions (SUSE Linux Enterprise Server, Red Hat Enterprise Linux 5.2–6.1, and CentOS 5.2–6.2), and FreeBSD. Hyper‐V is a hypervisor‐based virtualization technology that was first introduced for x64 versions of Windows Server 2008. Isolation is supported in terms of logical units of isolation, called partitions. Host nodes run root (or parent) partitions enabled by a supported version of the Windows Server operating system (2008, 2008 R2, or 2012). The root partition is the only one that has direct access to the hardware devices, and it creates child partitions through API calls. Improvements to Windows Server 2012 Hyper‐V (http://download.microsoft.com/download/a/2/7/a27f60c3‐5113‐494a‐9215‐d02a8abcfd6b/windows_server_2012_r2_server_virtualization_white_paper.pdf) include the following:

  • Multitenant VM isolation through private virtual LANs (PVLANs). A PVLAN is a technique used to isolate VMs that share a VLAN. Isolated mode means ports cannot exchange packets with each other at layer 2. Promiscuous ports can exchange packets with any other port on the same primary VLAN ID. Community ports on the same VLAN ID can exchange packets with each other at layer 2.
  • Protection against a malicious VM stealing IP addresses from other VMs using ARP spoofing, provided by Hyper‐V Extensible Switch.
  • Protection against rogue Dynamic Host Configuration Protocol (DHCP) servers through DHCP guard, by configuring which ports can have DHCP servers connected to them.
  • Isolation and metering through virtual port ACLs that enable customers to configure which MAC addresses can (and cannot) connect to a VM.
  • Ability to trunk traditional VLANs to VMs. Hyper‐V Extensible Switch trunk mode allows traffic from multiple VLANs to be directed to a single network adapter in a VM that could previously receive traffic only from one VLAN.
  • Ability to monitor traffic from specific ports flowing through specific VMs on the switch.

A scripting interface allows control, automated deployment, and workload management in Microsoft Azure. Authentication takes place over SSL for security and can use the user's existing certificate or generate a new one.
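As a generic illustration of certificate‐authenticated management traffic over SSL, the sketch below uses the Python requests library with a client certificate; the endpoint URL and certificate file names are hypothetical, not actual Azure endpoints.

```python
# A minimal sketch of certificate-authenticated management traffic over
# SSL/TLS using the requests library. The endpoint URL and certificate
# file names are illustrative assumptions, not actual Azure endpoints.
import requests

response = requests.get(
    "https://management.example.net/subscriptions/123/virtualmachines",
    cert=("management-cert.pem", "management-key.pem"),  # client certificate
    timeout=10,
)
response.raise_for_status()
print(response.status_code)
```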

Microsoft Azure Virtual Network provides the following capabilities:

  • Creation and management of virtual private networks in Microsoft Azure with the user's defined address space to connect with cloud services (PaaS) and VMs. The address space follows the RFC 1918 specification, and public addresses are not allowed in the virtual network.
  • Cross‐site connectivity over IPsec VPN between the virtual network and on‐premises network to enable a hybrid cloud and securely extend the on‐premises data center (https://docs.microsoft.com/en‐us/azure/vpn‐gateway/vpn‐gateway‐about‐vpn‐devices). This feature can be enabled by a VPN device or by using the Routing and Remote Access Service (RRAS) on Windows Server 2012. Microsoft Azure has validated a set of standard site‐to‐site (S2S) VPN devices in partnership with device vendors to ensure compatibility.

Microsoft Azure defined site‐to‐site VPNs can be either static or dynamic:

  • Static routing VPNs are policy based. Policy‐based VPNs encrypt and route packets through an interface based on a customer‐defined policy. The policy is usually defined as an access list.
  • Dynamic routing VPNs are route based. Route‐based VPNs depend on a tunnel interface specifically created for forwarding packets. Any packet arriving on the tunnel interface is forwarded through the VPN connection.

Microsoft Azure recommends using dynamic routing VPNs when possible. Different features are available for dynamic and static routing VPNs.

Microsoft Azure blob storage is used to store unstructured binary and text data. This data can be accessed over HTTP or HTTPS. Based on the user's preferences, data can be encrypted through the .NET Cryptographic Service Providers libraries. Through them, developers can implement encryption, hashing, and key management for storage and transmitted data. Azure Drive is a feature of Azure that provides access to data contained in an NTFS‐formatted virtual hard disk (VHD) persisted as a page blob in Azure Storage. A single Azure instance can mount a page blob for read/write access as an Azure Drive. However, multiple Azure instances can mount a snapshot of a page blob for read‐only access as an Azure Drive. The Azure Storage blob lease facility is used to prevent more than one instance at a time from mounting the page blob as an Azure Drive. When mounting a drive, the application has to specify credentials that allow it to access the page blob in the Microsoft Azure blob service. Microsoft Azure Drive supports two authorization schemes, account and key, as well as Shared Access Signatures (SAS).
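The text describes encryption through the .NET libraries; as an analogous sketch in Python, the fragment below encrypts data client‐side with the cryptography package before uploading it with the azure-storage-blob (v12) SDK. The connection string, container, and blob names are assumptions.

```python
# An analogous sketch in Python: encrypt data client-side with the
# "cryptography" package, then upload the ciphertext as a blob with
# "azure-storage-blob" (v12). Connection string and names are assumptions.
from azure.storage.blob import BlobServiceClient
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # store this key in a key vault, never with the data
ciphertext = Fernet(key).encrypt(b"sensitive customer record")

service = BlobServiceClient.from_connection_string("<connection-string>")
blob = service.get_blob_client(container="records", blob="record-001.bin")
blob.upload_blob(ciphertext, overwrite=True)
```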

The Azure Storage Service supports association of access permissions with a container through public access control. This allows public read access to the container and the blobs in it or public read access only to the blobs in the container and not to the container itself. The latter would, for example, prohibit unauthenticated listing of all the blobs in the container. The Azure Storage Service also supports shared‐access signatures, which can be used to provide a time‐limited token allowing unauthenticated users to access a container or the blobs in it. Shared access can be further managed through container‐level access policy.
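A minimal sketch of a time‐limited shared‐access signature using the azure-storage-blob (v12) SDK follows: the generated token grants read‐only access to a single blob for one hour. The account name, key, and blob names are assumptions.

```python
# A minimal sketch of a time-limited shared-access signature: the token
# grants read-only access to one blob for one hour. Account name, key,
# and blob names are illustrative assumptions.
from datetime import datetime, timedelta
from azure.storage.blob import BlobSasPermissions, generate_blob_sas

sas_token = generate_blob_sas(
    account_name="mystorageaccount",
    container_name="records",
    blob_name="record-001.bin",
    account_key="<account-key>",
    permission=BlobSasPermissions(read=True),
    expiry=datetime.utcnow() + timedelta(hours=1),
)
url = (
    "https://mystorageaccount.blob.core.windows.net/records/record-001.bin?"
    + sas_token
)
print(url)  # shareable, unauthenticated, read-only, expires in one hour
```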

By default, storage accounts are configured for geo‐redundant storage (GRS), meaning blob data is replicated both within the primary location and to a location hundreds of miles away (geo‐replication).

In addition, durability for Microsoft Azure Storage is achieved through replication of data. The replication mechanism used is Distributed File System (DFS), where data is spread out over a number of storage nodes. The DFS layer stores the data in what are called extents. This is the unit of storage on disk and the unit of replication, where each extent is replicated multiple times. Typical extent sizes range from approximately 100 MB to 1 GB. When storing a blob in a blob container, entities in a table, or messages in a queue, the persistent data uses one or more extents.

Microsoft Azure offers SQL Database, which is based on Microsoft SQL Server. It offers two types of access control, SQL authentication and a server‐side firewall that restricts access by IP address:

  • SQL authentication – SQL Database supports only SQL Server authentication: user accounts with strong passwords, configured with specific rights.
  • SQL Database firewall – Lets the user allow or prevent connections by specifying IP addresses or ranges of IPs.

Along with access control, SQL Database allows only secure connections, via SQL Server protocol encryption over SSL. SQL Database supports Transparent Data Encryption (TDE), which performs real‐time I/O encryption and decryption of data and log files. For encryption, it uses a database encryption key (DEK), stored in the database boot record for availability during recovery. TDE protects data stored in the database and enables software developers to encrypt data using the AES and Triple Data Encryption Algorithm (3DES) encryption algorithms without changing existing applications.
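As an illustration of an encrypted connection to SQL Database, the sketch below uses Python's pyodbc with Encrypt=yes to request protocol encryption; the server, database, and credentials are placeholders.

```python
# A minimal sketch of an encrypted connection to SQL Database from
# Python using pyodbc; Encrypt=yes requests SSL/TLS protocol
# encryption. Server, database, and credentials are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver.database.windows.net,1433;"
    "DATABASE=mydb;UID=appuser;PWD=<strong-password>;"
    "Encrypt=yes;TrustServerCertificate=no;Connection Timeout=30;"
)
cursor = conn.cursor()
cursor.execute("SELECT 1")
print(cursor.fetchone()[0])  # 1
```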

Microsoft Azure provides the following (Singh 2015):

  • Identity and access – Monitor access patterns to identify and mitigate potential threats. Help prevent unauthorized access with Azure multifactor authentication. End users have self‐service identity management.
  • Network security – Azure VMs and data are isolated from undesirable traffic and users. However, customers can access them through encrypted or private connections. Firewalled and partitioned networks help protect against unwanted traffic from the Internet. Customers can manage VMs with encrypted remote desktops and Windows PowerShell sessions, and can keep traffic off the Internet by using Azure ExpressRoute, a private fiber link between the client and Azure.
  • Data protection – Azure provides technology safeguards such as encryption, and operational processes about data destruction maintain confidentiality. Encryption is used to help secure data in transit between data centers and clients, as well as between and at Microsoft data centers, and clients can choose to implement additional encryption using a range of approaches. If customers delete data or leave Azure, Microsoft follows strict industry standards that call for overwriting storage resources before reuse, as well as physically disposing of decommissioned hardware.
  • Data privacy – Microsoft allows customers to specify the geographic areas where their data is stored. Furthermore, data can be replicated within a geographic area for redundancy. To limit its access to and use of customer data, Microsoft strictly controls access and permits it only as necessary to provide or troubleshoot the service. Client data is never used for advertising purposes.
  • Threat defense – Integrated deployment systems manage security updates for Microsoft software, and clients can apply update‐management processes to VMs. Microsoft provides continuous monitoring and analysis of traffic to reveal anomalies and threats. Forensic tools dissect attacks, and clients can implement logging to aid analysis. Clients can also conduct penetration testing of applications being run in Azure (legal permissions are required).
  • Compliance programs – These include ISO 27001, SOC1, SOC2, FedRAMP, UK G‐Cloud, PCI DSS, and HIPAA.

2.4 Protecting Cloud Infrastructure

In this section, we discuss several ways to protect the cloud infrastructure (Faatz and Pizette 2010).

2.4.1 Software Maintenance and Patching Vulnerabilities

Protecting software infrastructure in the Cloud is an essential activity for maintaining an appropriate security posture. For cloud providers and traditional IT alike, it involves activities such as securely configuring OSs and network devices, ensuring software patches are up to date, and tracking the discovery of new vulnerabilities.

The good news in terms of basic infrastructure security, such as configuration and patching, is that cloud providers may do a better job than most client organizations currently accomplish. The European Network and Information Security Agency (ENISA) observes, "… security measures are cheaper when implemented on a larger scale. Therefore, the same amount of investment in security buys better protection" (Catteddu and Hogben 2012). Large cloud providers will benefit from these economies of scale.

Cloud providers have an additional benefit: their systems are likely to be homogeneous, which is fundamental to delivering commodity resources on demand. Hence, a cloud provider can configure every server identically, and software updates can be deployed rapidly across the provider's infrastructure. As a contrasting example, one large federal agency observed that each of its servers was unique: every server had at least one deviation from defined configuration standards. This heterogeneity adds to the complexity of maintaining infrastructure security (Faatz and Pizette 2010).

Homogeneity also has a potential downside: it ensures that the entire infrastructure has the same vulnerabilities. An attack that exploits an infrastructure vulnerability will affect all systems in a homogeneous cloud. The characteristic that makes routine maintenance easier may increase the impact of a targeted attack. A potential area for future research would be to employ an instance of a completely different technology stack for the express purpose of validating the integrity of the initial homogeneous infrastructure.

Although it may be easier for cloud providers to maintain infrastructure security, government clients should ensure that they understand the provider's standards for configuring and maintaining the infrastructure used to deliver cloud services.

While some security information is proprietary and sensitive, many providers are starting to share more information in response to customer needs. For example, Google recently published a set of white papers providing general information about its security operations and procedures (https://services.google.com/fh/files/misc/security_whitepapers_march2018.pdf).

2.4.2 The Technology Stack

The hardware and software stack, whether it is commercial off‐the‐shelf, government off‐the‐shelf, or proprietary, has an impact on the soundness of the provider's security practices and how readily the government can understand them. For example, Google and some other providers use proprietary hardware and software to implement their clouds. A proprietary cloud infrastructure may be as secure as or more secure than a cloud infrastructure constructed of commodity hardware and commercial software; however, there is no standard for comparison. If a cloud vendor is using a proprietary infrastructure, it may be difficult for the government to assess the platform's vulnerabilities and determine security best practices. There are no commonly accepted secure configuration standards and no public source of vulnerability information for these proprietary infrastructures. As a potential mitigation and best practice, government clients should understand the provider's disclosure policy regarding known vulnerabilities, administrative practices, security events, etc. They also should have relevant reporting contractually specified.

Similar to the community and public cloud providers, government organizations implementing private cloud solutions may find it easier and faster to maintain secure configurations and timely patching. Unlike physical servers, virtual servers do not have to be configured or patched individually. Instead, the VM images are configured and patched. Measuring compliance also can be simplified by checking the VM images rather than running measurement agents on each virtual server.

2.4.3 Disaster Recovery

In addition to maintaining the currency of software and expeditiously plugging vulnerabilities, cloud computing providers must be able to quickly recover from disaster events. For the government client organization, cloud computing can both simplify and complicate disaster‐recovery planning. Because most major cloud providers operate several geographically dispersed data centers, a single natural disaster is unlikely to affect all centers. For example, Amazon EC2 describes its geographic resiliency as follows: “By launching instances in separate Availability Zones, you can protect your applications from failure of a single location” (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using‐regions‐availability‐zones.html). Some level of disaster recovery is inherent in a well‐designed, large‐scale, cloud computing infrastructure.

That said, circumstances might force a cloud provider to discontinue operations. Currently, most cloud service offerings are unique to each provider and may not be easily portable. An application built for the Google Apps platform will not run on Microsoft's Azure platform. Hence, clients may need to develop alternative hosting strategies for applications deployed to the Cloud. If dictated by system requirements for uptime and availability, organizations can develop processes to continue operations without access to community or public cloud‐based applications.

For a private cloud, technologies such as virtualization can be employed to help with disaster recovery. Given that virtualized images frequently can be deployed independent of the physical hardware, virtualization provides an inherent continuity of operations capability (i.e. virtualized applications can be easily moved from one data center to another).

2.4.4 Monitoring and Defending Infrastructure

The challenge of monitoring and defending cloud‐based systems depends on the service model and may increase due to shared control of the IT stack. Monitoring and defending systems consists of detecting and responding to inappropriate or unauthorized use of information or computing resources. Much like Microsoft Windows, which has been the dominant desktop OS and target of choice for malware, large public clouds and community clouds also are high‐value targets. Penetrating the substrate of a public or community cloud can provide a foothold from which to attack the applications of all the organizations running on the Cloud.

Audit trails from network devices, OSs, and applications are the first source of information used to monitor systems and detect malicious activity. Some or all of these sources may not be available to a cloud client. With SaaS, all audit trails are collected by the cloud provider. With PaaS, application audit trails may be captured by the client, but OS and network audit trails are captured by the provider. With IaaS, a government organization may capture audit trails from the virtual network, virtual OSs, and applications. The provider collects the audit trails for the physical network and the virtualization layer. Correlation of events across provider‐hosted VMs may be difficult, and the ability to place intrusion‐detection sensors in the VMs may be similarly constrained.

To date, most cloud providers have focused on monitoring and defending the physical resources they control. Unlike their clients, cloud providers have the technical ability to collect audit‐trail information and place intrusion‐detection sensors in the infrastructure where their clients cannot. Although they can do this, cloud providers may not be willing or able to share that data. In clouds that host multiple tenants, the provider would need to protect the privacy of all its customers, which complicates the ability to share information. As noted by Buck and Hanf, SLAs that specify the exact types of information that will be shared are essential (Buck and Hanf 2010).

2.4.5 Incident Response Team

The government client's incident response team will need to learn the response capabilities offered by the cloud provider, ensure appropriate security SLAs are in place, and develop new response procedures that couple the cloud provider information with its own data. Given the difficulty of obtaining provider infrastructure information, a government client's incident response team may need to rethink how it detects some types of malicious activity. For example, an incident response team that provides proactive services such as vulnerability scanning may not be allowed to perform these functions on systems and applications deployed in the Cloud. A cloud provider's terms of use may prohibit these activities, as it would be difficult to distinguish legitimate client‐scanning actions from malicious activities. Standard incident response actions may not be possible in the Cloud. For example, a government client's incident response team that proactively deletes known malicious e‐mail from users' inboxes may not have this ability in a cloud‐based SaaS email system. Given these challenges, it is essential that the appropriate contractual relationship with SLAs be established.

If the organization is creating a private cloud, there are new challenges that are different from many of the community and public cloud issues. The virtualization layer presents a new attack vector, and many components (e.g. switches, firewalls, intrusion‐detection devices) within the IT infrastructure may become virtualized. The organization's security operations staff must learn how to safely deploy and administer the virtualization software, and how to configure, monitor, and correlate the data from the new virtual devices.

While cloud computing may make some aspects of incident detection and response more complex, it has the potential to simplify some aspects of forensics. When a physical computer is compromised, a forensic analyst's first task is to copy the state of the computer quickly and accurately. Capturing and storing state quickly is a fundamental capability of many IaaS clouds. Instead of needing special‐purpose hardware and tools to capture the contents of system memory and copy disks, the forensic analyst uses the inherent capabilities of the virtualization layer in the IaaS Cloud. Leveraging this capability requires the incident response team to develop procedures for capturing and using this state information and, in the case of community and public clouds, develop and maintain a working relationship with the cloud provider.
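As a minimal sketch of such state capture on an IaaS platform, the fragment below uses boto3 to snapshot the EBS volume of a suspect EC2 instance so the disk state is preserved for analysis; the volume ID and incident label are illustrative assumptions.

```python
# A minimal sketch of forensic state capture in an IaaS cloud: snapshot
# the EBS volume of a suspect EC2 instance with boto3 so the disk state
# is preserved for analysis. The volume ID is an illustrative assumption.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",
    Description="Forensic capture of suspect instance, incident IR-42",
)
print(snapshot["SnapshotId"], snapshot["State"])  # e.g. snap-... pending
```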

2.4.6 Malicious Insiders

Clouds, whether public, community, or private, create an opportunity for malicious insiders. All three cloud deployment models create a new class of highly privileged insiders: cloud infrastructure administrators. OSs have long had privileged users such as the UNIX root user and the Microsoft Windows administrator. The risk associated with these users often has been managed using a variety of techniques (e.g. limiting the number of platforms on which a person can have privileged access). The cloud approach to providing computing resources may create users with broad privileged access to the entire underlying cloud infrastructure. Given this risk, mitigating controls and access restrictions must be maintained, because an unchecked, malicious cloud infrastructure administrator has the potential to inflict significant damage. For public and community clouds, it is important to understand how the vendor reduces the risk posed by cloud administrators. Organizations operating private clouds need to consider what operational and monitoring controls can be used to reduce this risk.

Public and community IaaS clouds significantly increase the number of people who are insiders or "near insiders." Multiple organizations will have VMs running on the same physical machine. Administrators of these neighbor VMs will have privileged access to those VMs – an excellent starting point for launching an attack. Using Amazon's EC2 IaaS offering, researchers demonstrated the ability to map the cloud infrastructure and locate specific target VMs (Ristenpart et al. 2009). Having located the target, the researchers were able to reliably place a VM that they controlled on the same physical server. This capability enables a variety of VM‐escape or side‐channel attacks to compromise the target. Hence, in multitenant IaaS, neighbors are similar to malicious insiders.

The key considerations identified in this section for monitoring and protecting computing and communications infrastructure in cloud deployments are as follows:

  • Large public clouds are high‐value targets.
  • Incident response teams must develop procedures (with contractual backing) for working with a cloud provider.
  • Cloud infrastructure simplifies forensic capture of system state.
  • Cloud virtualization technology may create a new class of highly privileged users with broad access to the cloud infrastructure.
  • Cloud neighbors pose a threat similar to malicious insiders.
  • Cloud service providers, through their homogeneous environments and economies of scale, may be able to provide better infrastructure security than many government organizations currently achieve.
  • Assessing the security posture of providers is complicated if proprietary hardware or software is used.
  • Many large‐scale cloud providers operate multiple, geographically dispersed data centers.
  • Unique cloud service offerings that are not easily portable make recovery from provider failure challenging.

2.5 Conclusion

There are many security issues related to cloud computing. Some reflect traditional web application, networking, and data‐hosting problems, while others relate to cloud‐specific characteristics such as virtualization and multitenancy. In general, the security concerns of the cloud environment can be categorized into three groups: identity, information, and infrastructure. Cloud infrastructure security is a critical aspect of cloud security, and any attack on the cloud infrastructure can cause significant service disruption. In this chapter, we briefly explained identity and information security, and discussed how cloud infrastructure security is investigated at three levels: network, host, and application. We also discussed other security issues to provide high‐level insight into cloud security.

References

  1. Buck, K. and Hanf, D. (2010). Cloud SLA considerations for the government consumer. MITRE Corporation technical paper.
  2. Catteddu, D. and Hogben, G. (2012). Cloud computing: benefits, risks and recommendations for information security. European Network and Information Security Agency technical report.
  3. Faatz, D. and Pizette, L. (2010). Information security in the clouds. MITRE Corporation technical paper.
  4. Goldberg, R.P. (1973). Architectural principles for virtual computer systems. PhD thesis. Harvard University.
  5. Keller, E., Szefer, J., Rexford, J. et al. (2010). NoHype: virtualized cloud infrastructure without the virtualization. Paper presented at the ACM SIGARCH Computer Architecture News.
  6. King, S.T. and Chen, P.M. (2006). SubVirt: implementing malware with virtual machines. Paper presented at the 2006 IEEE Symposium on Security and Privacy.
  7. Kizza, J.M. and Yang, L. (2014). Is the cloud the future of computing? In: Security, Trust, and Regulatory Aspects of Cloud Computing in Business Environments, 57. IGI Global.
  8. Ko, R.K.L., Jagadpramana, P., Mowbray, M. et al. (2011). TrustCloud: a framework for accountability and trust in cloud computing. Paper presented at the 2011 IEEE World Congress on Services (SERVICES).
  9. MacDonald, N. (2011). Yes, hypervisors are vulnerable. Gartner Blog.
  10. Mather, T., Kumaraswamy, S., and Latif, S. (2009). Cloud Security and Privacy: An Enterprise Perspective on Risks and Compliance. O'Reilly Media, Inc.
  11. Northcutt, S., Skoudis, E., Sachs, M. et al. (2008). Top Ten Cyber Security Menaces for 2008. SANS Institute: SANS Press Room.
  12. Padhy, R.P., Patra, M.R., and Satapathy, S.C. (2011). X‐as‐a‐Service: cloud computing with Google App Engine, Amazon Web Services, Microsoft Azure and Force.com. International Journal of Computer Science and Telecommunications 2 (9).
  13. Ristenpart, T., Tromer, E., Shacham, H. et al. (2009). Hey, you, get off of my cloud: exploring information leakage in third‐party compute clouds. Paper presented at the 16th ACM Conference on Computer and Communications Security.
  14. Singh, T. (2015). Security in public cloud offerings: issues and a comparative study of Amazon Web Services and Microsoft Azure. International Journal of Science Technology & Management (IJSTM) 6 (1).
  15. Violino, B. (2010). Five cloud security trends experts see for 2011. CSO Security and Risk.
  16. Xiao, Z. and Xiao, Y. (2013). Security and privacy in cloud computing. IEEE Communications Surveys & Tutorials 15 (2): 843–859.
  17. Young, R. (2010). IBM X‐Force mid‐year trend & risk report. IBM technical report.