Chapter 8

Cloud Security Alliance Research

Abstract

As mentioned earlier, our intention is to provide a single reference for all Cloud Security Alliance (CSA) research. This chapter will provide the readers with an overview of the various working groups within the CSA and details of their current findings.

Keywords

Big Data; CloudCERT; CloudTrust; Governance; Innovation initiative
Information in this chapter
▪ Big Data Working Group
▪ Cloud Data Governance
▪ CloudCERT
▪ CloudTrust Protocol
▪ Enterprise Architecture Working Group
▪ Incident Management and Forensics
▪ Innovation Initiative
▪ Security as a Service
▪ Security guidance for critical areas of focus in cloud computing
For those who have tracked the progress of this book, you may have noticed at least three different iterations regarding its content. These changes were driven simply by the desire to include as much as possible of the excellent work that Cloud Security Alliance (CSA) volunteers and staff have contributed, with multiple iterations of the table of contents produced in the attempt to squeeze more in. Obviously, while every effort was made to include as much as we could, there is still a considerable amount of excellent work that has not been referenced thus far.
We (the authors) recognized very early on that if we attempted to dedicate a chapter to each of the CSA working groups, the book would likely never be completed and would come to resemble the cloud version of the Encyclopedia Britannica! Even if we were able to draw a line under the work, the likelihood is that the content would be so substantial that it would have to be released as volumes, just like those physical encyclopedias gathering dust on our bookshelves today.
This, of course, is testimony to the dedicated support of the CSA family: so many wonderful and talented individuals have given their time and expertise to develop content that is making our digital world a safer place. The purpose of this chapter is therefore to incorporate areas of research that did not feature within the preceding text, and while every attempt was made to summarize all working groups and deliverables, we recognize that this was not possible. Readers are therefore strongly encouraged not only to treat the following text as a reference guide, but also to visit the CSA research site for a broader understanding of all areas currently being worked on, where they can identify the area of research that aligns with their interests and expertise and contribute to future deliverables.

Big Data Working Group

At the beginning of this book, it was stated that cloud computing is one of the hottest topics within the technology industry; it is not alone, however, and must surely be joined by the term “Big Data.” This term, much like cloud computing, suffers from multiple sources offering varying definitions, so we, the authors, were presented with a number of options in determining which definition to present within this text. According to analyst firm Gartner,

“Big Data” is high-volume, high-velocity, and high-variety information assets that demand cost-effective, innovative forms of information processing for enhanced insight and decision-making1.

An alternative definition was cited by the U.S. White House in its May 2014 publication entitled “Big Data: Seizing Opportunities, Preserving Values2”: “data is now available faster, has greater coverage and scope, and includes new types of observations and measurements that previously were not available.” In practical terms, Big Data provides remarkable opportunities that are being realized by many organizations across both public and private sectors. One such example was realized by the Los Angeles and Santa Cruz police departments, who worked with a private sector organization to use software that predicts which crimes are likely to occur. Fed with historical crime data, the software was able to pinpoint the likelihood of a crime occurring within a 500-ft radius; this resulted in a 33% decrease in burglaries and a 21% decrease in violent crimes in the areas where the software was used.3 This particular approach is known as predictive policing, allowing officers on the beat to focus on those areas where crime is most likely to occur.
Despite such positive benefits, there have been significant security and privacy concerns associated with Big Data. In the white paper by Robert H. Sloan and Richard Warner entitled “Big Data and the ‘New’ Privacy Tradeoff,” they cite, “both the potential benefits and risks from Big Data analysis are so much larger than anything we have seen before.” To understand and address some of these risks, the Big Data Working Group (BDWG) within the CSA was established to identify scalable techniques for addressing the security and privacy issues. The intent of the working group is to develop deliverables that can serve as best practice for Big Data security and privacy challenges. Such deliverables also include building relationships that can assist in the development of appropriate standards and research.
Before presenting the deliverables produced by the working group thus far, it is worth noting the role cloud computing plays within “Big Data.” As stated in the CSA research entitled “Top Ten Big Data Security and Privacy Challenges,4” “Big Data is cheaply and easily accessible to organizations large and small through public cloud infrastructure. Software infrastructures such as Hadoop enable developers to easily leverage thousands of computing nodes to perform data-parallel computing. Combined with the ability to buy computing power on-demand from public cloud providers, such developments greatly accelerate the adoption of Big Data mining methodologies.” Cloud computing therefore can be seen as a great enabler for the broader adoption of Big Data, and “subsequently new security challenges have arisen from the coupling of Big Data with public cloud environments.”

BDWG Research Deliverables

The preceding paragraphs cited the various security and privacy concerns associated with Big Data; these were presented as part of the deliverables from the BDWG through the Top Ten Big Data Security and Privacy Challenges.

Top Ten Big Data Security and Privacy Challenges

Published in 2013, this research identifies the following as the top 10 challenges to Big Data security and privacy:
1. Secure computations in distributed programming frameworks: Frameworks such as MapReduce are referred to as distributed computational frameworks because they allow the processing of large amounts of data across a distributed environment. Such a framework comprises two parts: (1) the mapper, which distributes work to the various nodes within the framework, and (2) the reducer, which collates and resolves the results returned by those nodes (a minimal sketch of this split follows this list). Security and privacy risks arise with untrusted mappers, which have the ability to impact confidentiality (by snooping on requests) but also integrity, through altering scripts or results. To address these risks, two models exist for maintaining trust in mappers: (1) authenticate each mapper to establish an initial trust relationship and repeat this process periodically and (2) apply mandatory access control to ensure that access to files aligns only with a predefined security policy.
2. Security best practices for nonrelational data stores: Data stores for nonrelational data may introduce security challenges due to their relative lack of built-in security capabilities. These challenges include the following scenarios:
a. Transactional integrity: Nonrelational data stores such as NoSQL experience challenges in achieving transactional integrity; introducing such validation will result in degradation of performance and scalability. To address these trade-offs, it is possible to leverage techniques such as the Architectural Trade-off Analysis Method, which can be used to evaluate proposed integrity constraints without significantly impacting performance.
b. Lax authentication mechanisms: Both the authentication and password storage mechanisms for NoSQL are not considered strong. Risks therefore exist that would allow an attacker to carry out a replay attack, where legitimate authentication traffic is captured and replayed (thereby granting the attacker access to resources). Equally, the REST communication protocol, which is based on HTTP, is prone to cross-site scripting and injection attacks. Furthermore, there is no support for third-party modules that would provide alternative authentication mechanisms.
c. Inefficient authorization mechanisms: Multiple authorization techniques exist across the various NoSQL solutions. Many, however, apply authorization only at the higher layers and do not support role-based access control.
d. Susceptibility to injection attacks: NoSQL is susceptible to a number of injection attacks that would, for example, allow an attacker to inject columns of their choosing into a database. This not only impacts the integrity of the data, but also introduces the potential for a denial-of-service (DoS) attack impacting the availability of the database.
e. Lack of consistency: Users are not provided with consistent results because each node may not be synchronized with the node holding the latest image.
f. Insider attacks: The combination of the above, as well as the implementation of poor security mechanisms (e.g., security logging) would allow potential insider attacks to be conducted without detection.
3. Secure data storage and transaction logs: While data and transaction logs can be stored and managed across various storage media manually, the volume of such data means that automated solutions are becoming more prevalent. Such automated solutions may not track where the data are actually stored, introducing challenges in the application of security controls. An example would be where data that are rarely used are stored on cheaper storage; if this cheaper tier does not have the same security controls and the data are sensitive, then a risk is introduced. Organizations should therefore ensure that their storage strategy considers not only the retrieval rate for such data, but also the data’s sensitivity.
4. End-point input validation/filtering: A Big Data implementation will likely collect data from a multitude of sources, but the challenge will be attributing the level of trust associated with the data provided by such sources. To illustrate this multitude of sources, consider the following from the U.S. White House publication cited earlier: “The advent of the more Internet-enabled devices and sensors expands the capacity to collect data from physical entities, including sensors and radio-frequency identification (RFID) chips. Personal location data can come from GPS chips, cell-tower triangulation of mobile devices, mapping of wireless networks, and in-person payments.” A threat exists where a malicious attacker is able to manipulate the data provided by the sensor(s), impersonate a legitimate sensor, manipulate the input sources of the sensed data (for example, if a sensor collects temperature data, it would be possible to artificially change the temperature within the vicinity of the sensor so that a false reading is ultimately submitted), or manipulate the data transmitted by a sensor.
5. Real-time security monitoring: A key use case for Big Data is its ability to assist in the security of other systems. This includes both the monitoring of the Big Data infrastructure itself and the use of that same infrastructure for security monitoring; for example, a cloud service provider could leverage Big Data to analyze security alerts in real time and subsequently reduce the number of false positives within its environment. The challenge, however, is that the sheer volume of alerts goes beyond the capacity for human analysis. With Big Data, these alerts will likely increase even further and place greater pressure on already overstretched security teams. In the white paper published by security firm RSA entitled “RSA-Pivotal Security Big Data Reference architecture5,” the use of Big Data for security monitoring is described as addressing the following requirements:
a. Better visibility from networks to servers, and applications to end points
b. More contextual analysis to help prioritize issues more effectively
c. Actionable intelligence from diverse sources, both internal and external, to tell the system what to look for in an automated way, and respond quicker
If a public cloud is used to support Big Data security monitoring, it is important to consider the associated risks, including the security of the public cloud itself, of the monitoring applications, and of the input sources. For further information, it is worth noting the recent publication by the CSA BDWG entitled “Big Data Analytics for security intelligence6.”
6. Scalable and composable privacy-preserving data mining and analytics: The use of Big Data can lead to privacy risks being realized. These risks can arise through data leakage, where, for example, an insider may intentionally release the data, or indeed through an authorized third party. Another consideration is the release of data for research purposes. Where large data volumes are concerned, there is a risk that even if the data are anonymized, it may be possible to infer the data subject. Consider a health care example: while the name, house number, and zip code are obfuscated, it is possible for a medical professional to identify the data subject within a given town because only one person has a particular combination of medical conditions. To mitigate these risks, organizations should consider the use of security controls such as encryption, access controls, and separation of duties. Another approach is the use of pseudonymization, where identifying fields are replaced with artificial fields (pseudonyms); a brief sketch of this technique appears later in this section. According to Neelie Kroes, the EU Commissioner responsible for the Digital Agenda, using pseudonymization means that “Companies would be able to process the data on grounds of legitimate interest, rather than consent. That could make all the positive difference to Big Data: without endangering privacy. Of course, in those cases, companies still (need) to minimize privacy risks. Their internal processes and risk assessments must show how they comply with the guiding principles of data protection law. And—if something does go wrong—the company remains accountable.7
Further details on the privacy considerations are published in the research entitled “Big Data and the future of Privacy,8” which considers five fundamental questions regarding the role of privacy and the measures to mitigate privacy risks as they pertain to Big Data:
a. What are the public policy implications of the collection, storage, analysis, and use of Big Data? For example, do the current U.S. policy framework and privacy proposals for protecting consumer privacy and government use of data adequately address issues raised by Big Data analytics?
b. What types of uses of Big Data could measurably improve outcomes or productivity with further government action, funding, or research?
c. What technological trends or key technologies will affect the collection, storage, analysis, and use of Big Data?
d. How should the policy frameworks or regulations for handling Big Data differ between the government and the private sector?
e. What issues are raised by the use of Big Data across jurisdictions such as the adequacy of current international laws, regulations, or norms?
7. Cryptographically enforced data-centric security: Protecting access to data has invariably involved applying security to the systems in which the data are stored. This approach, however, is exposed to a large number of attacks that can circumvent the system’s security and allow the attacker access to the data. An alternative approach is to use strong cryptography to protect the data themselves; while threats such as covert side-channel attacks remain, these are considerably more difficult to carry out.
The challenges of using cryptography within Big Data environments are discussed within the BDWG research entitled “Top Ten Challenges in Cryptography for Big Data.9” It concludes that cryptography should be seen as an enabling technology critical for the adoption of cloud computing and Big Data, because it provides “mathematical assurance” about the level of trust that can be placed in a third party when handing over critical/personal data.
8. Granular access control: Enforcing the need-to-know principle is an important foundation for achieving confidentiality of data. One of the measures that can be leveraged to achieve this principle is mandatory access control, combined with appropriately strong authentication. It should be feasible for end customers to understand the access control methodologies deployed by the cloud provider and determine whether they are appropriate.
9. Granular audits: Although security monitoring will provide a feed of security events as they occur, there is the potential that an attack may go unnoticed. Regular audits are therefore an important measure to identify intrusions that monitoring may have missed. Although this is not a new area, the scope, granularity, and number of inputs will likely differ in a Big Data context.
10. Data provenance: The provenance, in other words the source, of the data will likely be of importance. The provenance will determine the level of trust associated with the data; for example, when investigating a security incident it may be important to know how the data were created, particularly if the incident could end up in a court or disciplinary situation.
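To make the mapper/reducer split described in challenge 1 more concrete, the following is a minimal, single-process sketch of a MapReduce-style word count in Python. It is purely illustrative: a real framework such as Hadoop distributes the map and reduce phases across many nodes, which is precisely where the trust issues described above arise.

```python
from collections import defaultdict

def mapper(record):
    # Map phase: emit (key, value) pairs for one input record.
    for word in record.split():
        yield word.lower(), 1

def reducer(key, values):
    # Reduce phase: collate the values emitted for a key into a single result.
    return key, sum(values)

def map_reduce(records):
    # Drive both phases locally; a real framework runs mappers/reducers on many nodes.
    intermediate = defaultdict(list)
    for record in records:
        for key, value in mapper(record):
            intermediate[key].append(value)
    return dict(reducer(key, values) for key, values in intermediate.items())

print(map_reduce(["error disk full", "error network down", "warning disk slow"]))
```

A compromised or untrusted mapper in a distributed deployment could emit altered pairs at the intermediate step, which is why the challenge calls for periodic mapper authentication and mandatory access control.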
The Top Ten Challenges as they relate to the Big Data ecosystem are graphically depicted in Figure 8.1.
The role of Big Data in the growth of cloud computing, and in particular the public cloud, is significant; “Big Data analytics are driving rapid growth for public cloud computing vendors with revenues of the top 50 public cloud providers shooting up 47% in the fourth quarter of the last year to $6.2 billion, according to Technology Business Review Inc.10” With stories of Big Data being able to predict a teenager’s pregnancy even before her own father knew,11 and with these larger data stores not only increasing the number of people who have access to the data (and therefore increasing the risk) but also becoming more attractive to attackers, the need for security and privacy for Big Data has never been so important, particularly for Big Data in cloud computing, where the level of transparency and flexibility in determining controls will not be as great as in internally hosted environments. It is for this reason that readers are encouraged to track the research and deliverables produced by the BDWG.
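As a concrete illustration of the pseudonymization approach mentioned under challenge 6, the sketch below replaces identifying fields with keyed pseudonyms (here an HMAC), so that records remain linkable for analysis without exposing the original identifiers. The field names are hypothetical, and a production scheme would also need to protect the pseudonymization key and assess residual re-identification risk.

```python
import hmac
import hashlib

# The pseudonymization key must itself be protected (e.g., in a key management service).
SECRET_KEY = b"replace-with-a-securely-stored-key"

IDENTIFYING_FIELDS = {"name", "house_number", "zip_code"}   # hypothetical field names

def pseudonymize(record: dict) -> dict:
    # Replace identifying fields with keyed pseudonyms; leave analytic fields intact.
    out = {}
    for field, value in record.items():
        if field in IDENTIFYING_FIELDS:
            digest = hmac.new(SECRET_KEY, str(value).encode(), hashlib.sha256)
            out[field] = digest.hexdigest()[:16]   # stable pseudonym for the same input
        else:
            out[field] = value
    return out

patient = {"name": "Jane Doe", "house_number": "42", "zip_code": "90210",
           "conditions": ["asthma", "diabetes"]}
print(pseudonymize(patient))
```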
FIGURE 8.1 Top 10 security and privacy challenges in the Big Data ecosystem.

Cloud Data Governance

With the transition to cloud computing, end customer organizations hand over the management of systems that host their data to third parties. The level of control retained will depend on the cloud model itself, which will vary based on the as-a-service model used. Such a transition will invariably mean that the level of transparency provided to end customers decreases, particularly in those areas that cannot be technically measured (for example, an update of antivirus software). As such, the governance employed by the provider, which involves the processes, roles, and technologies for managing and governing data in cloud computing environments, will likely raise concerns, particularly as these are more difficult to measure in real time. It is for this reason that the Cloud Data Governance Working Group was established to:
▪ Understand the requirements of the various stakeholders in governing and operating data in the cloud.
▪ Provide a series of recommendations on the best practices to address the issues raised in the earlier phase.

CSA Research Deliverables

At present, the deliverables available from the working group comprise the survey results from the “Cloud Consumer Advocacy Questionnaire and Information Survey Results (CCAQIS).12” The following summarizes some of the key findings from version 1.0 of the survey:

Data Discovery

Does the CSP Provide a Capability to Locate and Search All of a Customer’s Data?
Yes: 59%
No: 41%


Location of Data

Does the CSP Allow a Customer to Select a Specific Location for Use and/or Storage of the Customer’s Data?
Yes: 82%
No: 18%


Data Aggregation and Inference

Does the CSP Provide Customers with Controls Over Its Data to Ensure That Data can or cannot be Aggregated According to Customer Needs and/or Restrictions?
Yes: 58%
No: 42%


Does the CSP Provide the Ability to Mask Data from Selected Customer Personnel, as Determined by a Customer, to Prevent Data Aggregation or Inference Problems for a Customer?
Yes: 65%
No: 35%


Encryption and Key Management Practices

Does the CSP Provide End-to-End Encryption for Data in Transit?
Yes: 84%
No: 16%


Data Backup and Recovery Schemes

Does the CSP Offer Data Backup and Recovery Services to Customers?
Yes: 88%
No: 12%


If Yes, is the Specific Location for Such Selectable by the Customer?
Yes: 53%
No: 47%


No two clouds are alike. This statement was inferred in Chapter 1, and the results above support it, providing clear evidence of the lack of standardization across cloud service providers in the area of governance. However, there are some trends that are of interest:
CSP’s Areas of Strength and Weakness as They Relate to Cloud Governance
Mature Areas / Immature Areas

▪ Control over aggregation of data

▪ Vetting of encryption algorithms

▪ Define access to their data

▪ Technical enforcements of multitenancy

▪ Timeliness of removal of data

▪ Cryptographic key management scalable to cloud

▪ Methods for handling data remanence

▪ Data remanence and methods used to ensure data are removed

▪ Mechanisms for customers to determine which columns are encrypted and to prevent inference from nonencrypted column


Following the results of the survey, the second phase of the Cloud Data Governance Working Group can focus its efforts on defining best practice recommendations. However, what is clearly evident is that, with such variance among providers regarding the level of governance, the due diligence process has never been more important.

CloudCERT

In Chapter 1, there was a discussion regarding the advent of EU regulation that would class cloud computing as “critical infrastructure.” This classification is understandable given the concentration of computing resources, whereby a cloud incident can have a greater detrimental impact on multiple organizations than if the resources were internally hosted. As a result of this greater potential impact, the CSA launched the CloudCERT initiative. Composed of subject matter experts from cloud service providers, telecommunications providers, national CERTs, and industry, the mission of CloudCERT is to:

Enhance the capability of the cloud community to prepare for and respond to vulnerabilities, threats, and incidents in order to preserve trust in cloud computing.

CloudTrust Protocol

Introduced in Chapter 2, the CloudTrust Protocol (CTP) is intended to provide cloud end customers with the ability to query the security controls deployed by the provider. One of the many challenges discussed throughout this book is the lack of transparency within cloud computing, an issue the CTP intends to address. In certain circumstances, particularly where regulated data are hosted by third-party providers (such as CSPs), the end customer will be responsible for ensuring that third parties have the appropriate controls in place, and failure to ensure this will leave the end customer (or data controller) liable for potential fines. Relying on certification, and the annual attestation it provides regarding the controls deployed by the provider, may therefore not provide the requisite level of confidence.
The original developer of the CTP is CSC, which in its white paper entitled “Digital Trust in the Cloud13” describes CTP as an asynchronous “question and response” protocol that can be presented to any provider and is ultimately controlled by the clients themselves. This allows the end customer to query the configuration, operating status, and any other key information about the provider that the end customer is interested in. The provider, on receiving the information request, can decline to respond, but should recognize that responding gives it the opportunity to deliver information “in the best possible way for them.” Within the CSC publication entitled “Digital Trust in the Cloud: A précis for CTP 2.0,14” a ratio is presented to express the number of elements of transparency a provider is able to support. This is referred to as the CloudTrust Index (CTI), which can be used by end customers to determine how transparent a provider is with regard to security, privacy, and compliance. It is worth noting, however, that not every cloud implementation will require a CTI of 1 (the highest level of assurance); the level of assurance sought will depend on the sensitivity of the data/services that are externally hosted. This is depicted in Figure 8.2.
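Assuming the CTI is expressed as the fraction of the elements of transparency that a provider can respond to, the calculation reduces to a simple ratio, as in the illustrative sketch below; the element names are invented and the exact scoring method used by the CTP may differ.

```python
def cloud_trust_index(supported_elements, all_elements):
    # CTI as the fraction of transparency elements a provider can answer (1.0 = all of them).
    return len(set(supported_elements) & set(all_elements)) / len(all_elements)

# Hypothetical subset of the CTP elements of transparency.
elements = ["policy_violation_log", "audit_event_log", "authorized_users", "incident_summary"]
provider_supports = ["audit_event_log", "authorized_users"]
print(cloud_trust_index(provider_supports, elements))   # 0.5
```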
Implementing CTP is not restricted to one particular operating model, although “the design intention has always been to create fully automated implementations of the CTP as an end-to-end RESTful Web service, such complete automation is not strictly necessary to achieve the ultimate objective, i.e., reclaiming important elements of transparency in the cloud.14” Although an automated approach may be the intention, alternate out-of-band communications are also available, such as the use of e-mail. Even though the latter may be inefficient, it is anticipated that providers will adopt more in-band communication for CTP queries, such as the publication of APIs. In terms of the elements of transparency, the second revision of CTP defines 23 elements across multiple families; for example, under the audit log family, the following elements are included:
FIGURE 8.2 CTI used as a transparency indicator.
▪ Provide log of policy violations {in last “n” hours} (e.g., malware elimination, unauthorized access attempts, etc.)
What does this mean?
This would allow the end customer to see those events that are in violation (or indeed attempted violation) of the client’s policies. For example, this may be any attempt to access specific files that are hosted by the provider.
▪ Provide audit/event log {for last “n” hours}
What does this mean?
This log request asks the provider to send all log files from the date (“n” hours) back to the customer regardless of whether a violation occurred or not.
▪ Provide a list of currently authorized users/subjects and their permissions
What does this mean?
This requests a log detailing those entities that have authorized access to assets owned by the end customer. It will also include the permissions allocated to those entities.
▪ Provide incident declaration and response summary {for last “n” hours}
What does this mean?
This requests log data for those events that the service provider determines to be incidents, as well as a description of the actions taken and the latest update on the status of each identified incident.
The above is only a snapshot of the transparency elements within version 2 of the CTP; however, with the migration of more critical services to the cloud, there is no question that the CTP will play an increasingly important role in the selection of a service provider, as well as in the ongoing management of CSPs. A hedged sketch of what such a query might look like in practice follows.
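To give a feel for what an in-band, RESTful CTP exchange might look like, the sketch below issues a hypothetical “question and response” request for the audit/event log element covering the last n hours. The endpoint path, parameters, and response fields are assumptions made for illustration and are not taken from the published CTP specification.

```python
import requests

def request_audit_log(base_url: str, api_token: str, hours: int = 24) -> list:
    # Ask the provider for its audit/event log for the last `n` hours (hypothetical API).
    response = requests.get(
        f"{base_url}/ctp/elements/audit-log",          # hypothetical element endpoint
        params={"last_hours": hours},
        headers={"Authorization": f"Bearer {api_token}"},
        timeout=30,
    )
    response.raise_for_status()                        # the provider may also decline (e.g., 403)
    return response.json().get("events", [])

# Example usage (requires a provider exposing such an endpoint):
# events = request_audit_log("https://csp.example.com", "api-token", hours=48)
```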

Enterprise Architecture Working Group

The Enterprise Architecture Working Group has produced a number of deliverables. Most recently, version 2 of the Enterprise Architecture provides both a methodology and a set of tools enabling security architects, enterprise architects, and risk management professionals to leverage a common set of solutions that fulfill their common needs: to assess where their internal IT and their cloud providers stand in terms of security capabilities, and to plan a road map to meet the security needs of their business.15
It is anticipated that the architecture will be used in any number of design phases; these range from assessing opportunities for improvement, creating road maps for technology adoption, and defining reusable security patterns, to assessing potential cloud providers and security vendors against common capabilities.

Incident Management and Forensics

Even though the area of incident management is covered in detail in Chapter 9, it is worthwhile detailing the output of the CSA working group focusing on incident management and forensics to understand its particular areas of research. As will be detailed in that chapter, the introduction of cloud computing adds complexity to the management of security incidents, not least because the data are managed by a third party. The intention of the working group is to develop best practices for the management of security incidents within the cloud. The scope of the group will address the following topics (note this list is not exhaustive):
▪ Incident management (IncM) in cloud environments: This will include the life cycle of IncM, legal considerations, locations of available evidence, etc.
▪ Cloud forensics: To include CSP capabilities, mapping against ISO 27037, the process for conducting cloud forensic investigations, etc.
▪ eDiscovery
▪ Legal and technical issues related to cloud forensics: To include best practices for SLAs required for forensics support, the management of personally identifiable data, etc.
Available research from the working group includes a document entitled “Mapping the Forensic Standard ISO/IEC 27037 to Cloud Computing.16” The purpose of ISO 27037 is to establish a baseline within the sphere of digital forensics. The research maps the components of the ISO standard and considers the requirements in the context of cloud; these requirements focus on the identification, collection, acquisition, and preservation of digital evidence. In particular, the analysis considers the complexities that end customers of cloud services face in relation to the requirements defined by the ISO standard. For example, it will be necessary to collect (5.4.3) digital evidence such that items are “removed from their original location to a laboratory or another controlled environment for later acquisition and analysis.” For cloud environments, however, this will likely prove challenging, and as such “acquisition should usually be preferred over collection to avoid impacts to parties not involved in the matter and the gathering of irrelevant information that must be excluded during analysis.” It is therefore important for end customers to ascertain the level of forensic support provided by the CSP, as not all currently provide complete support. Further, as detailed within the research, the area of cloud forensics still has specific challenges that are generally easier to manage for internally provisioned services. The customer is therefore strongly encouraged to incorporate this as part of any due diligence process.

Innovation Initiative

According to its charter,17 the mission of the CSA Innovation Initiative (CSA II) is to:
▪ Identify specific issues relating to trust and security that would inhibit the adoption of next generation information technology
▪ Articulate the guiding principles and objectives that IT innovators must address.
▪ Help innovators incubate technology solutions that align with our principles and address the systemic gaps we have identified.
It is intended that the working group will introduce innovators into the CSA community and refer them to the CSA II subcommittee or to external partners. This subcommittee will be composed of capital partners and technologists, and will allow innovators to get feedback on their products/services and become actively supported. The deliverables of the initiative “may come in the form of a report on an annual basis to the CSA from the working group providing the key metrics of performance and the measurable outcomes.”

Security as a Service

The Security as a Service Working Group focuses its research on defining the term security-as-a-service, as well as the various categories within this definition. In addition, there are multiple research deliverables that focus on the implementation practices for the defined security service categories.
With end customers leveraging various security-as-a-service offerings from different providers, they ultimately give up a degree of control over not only the data, but also functionality and operations. This naturally means that the provider will be required to offer the requisite transparency to customers; the level of transparency will be entirely dependent on the level of assurance required. The risks to an end customer using such a service, compounded by a lack of transparency, include (but are not limited to) vendor lock-in, identity theft, and unauthorized access. Organizations either considering or already leveraging any of the following services in the cloud should consider the guidance provided by the working group:

Category 1: Identity and Access Management (IAM)

“Identity management includes the creation, management, and removal (deletion) of a digital identity. Access management includes the authorization of access to only the data an entity needs to access to perform required duties efficiently and effectively.18” The purpose of the guidance is “to define the requirements of secure identity and access management, and the tools in use to provide IAM security in the cloud.”
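A minimal sketch of the access management half of that definition is shown below, assuming a simple role-to-permission mapping that grants an entity only the data access it needs to perform its duties; the roles and permissions are illustrative only.

```python
# Hypothetical role-to-permission mapping enforcing least privilege.
ROLE_PERMISSIONS = {
    "hr_analyst": {"read:employee_records"},
    "payroll_admin": {"read:employee_records", "write:payroll"},
}

def is_authorized(role: str, permission: str) -> bool:
    # Grant access only if the entity's role explicitly includes the permission.
    return permission in ROLE_PERMISSIONS.get(role, set())

assert is_authorized("payroll_admin", "write:payroll")
assert not is_authorized("hr_analyst", "write:payroll")
```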

Category 2: Data Loss Prevention

Data loss prevention (DLP) technologies are used to ensure that data, whether in transit or at rest, adhere to the policies defined by the organization. For example, this may be to ensure that data containing the word CONFIDENTIAL are not sent outside of the organization. Such a policy could also be applied to alternate storage devices, such as USB drives. The opportunity exists for end customers to utilize DLP-as-a-service, whereby functionality such as encryption or data identification (e.g., keyword searching) can be used to mitigate threats such as data leakage and to support regulatory compliance.
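As a hedged illustration of the keyword-based policy just described, the sketch below flags content containing the word CONFIDENTIAL before it leaves the organization; a real DLP service would combine many detection techniques (fingerprinting, regular expressions, contextual analysis) rather than a single keyword.

```python
import re

# Hypothetical policy: block outbound messages containing classification markings.
BLOCKED_PATTERNS = [re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE)]

def dlp_verdict(message: str) -> str:
    # Return 'block' if the message violates policy, otherwise 'allow'.
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(message):
            return "block"
    return "allow"

print(dlp_verdict("Quarterly results - CONFIDENTIAL - do not distribute"))  # block
print(dlp_verdict("Lunch menu for Friday"))                                 # allow
```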

Category 3: Web Security

Customers have the opportunity to leverage Web security as a service; this ensures that all traffic can be diverted through a cloud service for inspection before entering the enterprise, or indeed that all outbound traffic is inspected. Within this category, there are additional areas of functionality such as Web filtering, where all outbound Web requests from users are checked against the internal policy to determine whether users are allowed to access the requested resources. For example, is a user allowed to check his/her social media accounts during working hours? For inbound traffic, these services can be used to ensure that incoming requests are not malicious. This, of course, is only a small snapshot of the types of services offered through Web security as a service.
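The outbound Web-filtering decision described above can be sketched as a simple policy lookup; the URL categories and the working-hours rule below are invented purely to illustrate the kind of check a cloud Web security service performs.

```python
from datetime import datetime

# Hypothetical URL categorization and policy.
URL_CATEGORIES = {"facebook.com": "social_media", "intranet.example.com": "business"}
BLOCKED_DURING_WORK_HOURS = {"social_media"}

def allow_request(host: str, now: datetime) -> bool:
    # Check an outbound Web request against the organization's policy.
    category = URL_CATEGORIES.get(host, "uncategorized")
    working_hours = 9 <= now.hour < 17
    if working_hours and category in BLOCKED_DURING_WORK_HOURS:
        return False
    return True

print(allow_request("facebook.com", datetime(2024, 5, 1, 11, 0)))   # False during work hours
print(allow_request("facebook.com", datetime(2024, 5, 1, 20, 0)))   # True after hours
```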

Category 4: E-mail Security

Applying e-mail security controls within a cloud-based service ensures that all e-mails entering and leaving an organization (inbound and outbound) are scanned for malicious content, or indeed content that deviates from policy. Features include scanning e-mails to determine whether they are classed as spam or phishing, contain malware, or should be encrypted as defined by the end customer policy.

Category 5: Security Assessment

Cloud computing has been used to deliver security assessments for some time, with early trailblazers sitting comfortably within this category. Customers benefit from the quick setup time, the pay-per-use payment models that exist (although alternate models, such as subscription services, are also available), and elasticity. These services can be used to identify vulnerabilities within both hosts inside the enterprise and externally facing systems; security assessments can also be used for compliance purposes.

Category 6: Intrusion Management

A growing service category is the use of dedicated cloud providers to review relatively large data sets to identify evidence of intrusion. This can be done in-line, whereby traffic is routed through a security service provider, or indeed via a hybrid deployment with sensors deployed within the end customer’s environment. These services leverage deep packet inspection technology using signatures, behavioral analysis, and other methods to identify anomalies that may indicate potential intrusions.

Category 7: Security Information and Event Management

The role of security information and event management (SIEM) systems is to collect log and event data to provide an overall view of security within a given environment. By transitioning a SIEM service to the cloud, the end customer has the opportunity to transfer the management and storage of logs to the cloud, as well as the event correlation used to identify potential intrusions into the monitored environment.
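A toy example of the event correlation a SIEM performs is shown below: events are collected and an alert is raised when the same source exceeds a threshold of failed logins within a time window. The field names and thresholds are illustrative.

```python
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)
THRESHOLD = 3   # hypothetical: 3 failures in 5 minutes suggests a brute-force attempt

def correlate_failed_logins(events):
    # Yield an alert for any source IP with too many failed logins inside the window.
    failures = defaultdict(list)
    for event in sorted(events, key=lambda e: e["time"]):
        if event["name"] != "login_failure":
            continue
        times = failures[event["source_ip"]]
        times.append(event["time"])
        # Keep only failures inside the sliding window.
        failures[event["source_ip"]] = [t for t in times if event["time"] - t <= WINDOW]
        if len(failures[event["source_ip"]]) >= THRESHOLD:
            yield {"alert": "possible brute force", "source_ip": event["source_ip"]}

events = [{"time": datetime(2024, 5, 1, 9, 0, i), "name": "login_failure", "source_ip": "10.0.0.5"}
          for i in range(3)]
print(list(correlate_failed_logins(events)))
```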

Category 8: Encryption

Encryption-as-a-service (EaaS) simplifies the key management process; “You throw data at it and it does all the key management and key backups. It’s all done centrally. All the user needs to know is what data to protect and who needs to be given access. People have been afraid of encryption for a very long time, so the ‘as a service’ model makes it easier for them to consume19” according to Tsion Gonen of Safenet. Key management is only one element that falls into the EaaS category; other offerings include various Virtual Private Network (VPN) services, as well as the encryption of data both at rest and in transit.
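A minimal sketch of the “throw data at it” model described by Gonen is shown below, assuming a central service that generates and stores keys and performs encryption on the caller’s behalf. It uses the widely available Python cryptography package and is not intended to represent any particular vendor’s API; in practice the keys would be held in an HSM or dedicated key management service.

```python
from cryptography.fernet import Fernet

class EncryptionService:
    # Toy encryption-as-a-service: callers name a dataset, the service handles keys.

    def __init__(self):
        self._keys = {}   # in practice keys would live in an HSM or key management service

    def _key_for(self, dataset: str) -> Fernet:
        if dataset not in self._keys:
            self._keys[dataset] = Fernet.generate_key()   # central key generation and storage
        return Fernet(self._keys[dataset])

    def encrypt(self, dataset: str, plaintext: bytes) -> bytes:
        return self._key_for(dataset).encrypt(plaintext)

    def decrypt(self, dataset: str, token: bytes) -> bytes:
        return self._key_for(dataset).decrypt(token)

service = EncryptionService()
token = service.encrypt("customer_records", b"card ending 4242")
print(service.decrypt("customer_records", token))   # b'card ending 4242'
```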

Category 9: Business Continuity and Disaster Recovery

Recovery in the event of a disaster would seem well placed within a cloud environment. For example, many organizations would want to have a “hot site,” a fully functioning facility ready in case their primary environment fails. This could prove costly; within a cloud-based environment, however, “a tenant could make use of low-specification guest machines to replicate applications and data to the cloud, but with the provision to quickly ramp up the CPU, RAM, etc., of these machines in a business continuity/disaster recovery scenario.20”

Category 10: Network Security

The provision of network security will likely include both virtual and physical devices that demand integration to ensure that the virtual network environment has visibility of all applicable traffic. Services within this category will include not only firewall services, but also DDoS (distributed denial of service) protection, intrusion detection, and intrusion protection services.

Security Guidance for Critical Areas of Focus in Cloud Computing

Now in its third revision,21 the guidance provided by the CSA was originally published in 2009 as v1.0 and was updated later that year; the third and current version was published in 2011, building upon previous versions. The intention of this third version was to provide recommendations that could be measured and therefore audited. The guidance makes numerous recommendations to help reduce risk when adopting cloud computing, and should be seen as a method for determining an organization’s level of risk tolerance during the migration of an asset to the cloud.
The guidance comprises a number of domains highlighting the cloud computing areas of concern, divided into broad policy areas (governance) and tactical security considerations (operational). The domains are as follows:
1. Cloud computing architectural framework
2. Governance and enterprise risk management
3. Legal issues: Contracts and electronic discovery
4. Compliance and audit
5. Information management and data security
6. Portability and interoperability
7. Traditional security, business continuity, and disaster recovery
8. Data center operations
9. Incident response, notification, and remediation
10. Application security
11. Encryption and key management
12. Identity and access management
13. Virtualization
14. Security as a service
The “guidance contains extensive lists of security recommendations. Not all cloud deployments need every possible security and risk control.” Organizations are therefore encouraged to spend time up front evaluating their risk tolerance and potential exposures, as this “will provide the context you need to pick and choose the best options for your organization and deployment.21”

Software Defined Perimeter

The traditional approach toward protecting an organization leverages a perimeter that separates the trusted internal environment from the external untrusted world. However, multiple pressures have eroded the value of this fixed boundary; as Paul Simmonds (speaking in 2007 as a board member of the Jericho Forum) said of firewalls, “In a large corporate network, it’s good as a quality-of-service boundary but not as a security service.22”
To address the challenges associated with fixed traditional perimeters, software-defined perimeters (SDPs) are seen as a way to retain the benefits of a perimeter while gaining the flexibility to deploy it anywhere. In “its simplest form, the architecture of the SDP consists of two components: SDP hosts and SDP controllers.23” An SDP host can either initiate a connection or accept one, all of which is managed through interaction with SDP controllers. The controller manages which hosts are able to communicate with one another and whether any external authentication service will be used. The workflow is graphically depicted in Figure 8.3; it illustrates that once the hosts and controllers are brought online (and have gone through the authentication phase), the controller provides the initiating host with a list of the accepting hosts it is authorized to connect to, along with additional policies (e.g., to use encryption). Once this is complete, the initiating SDP host initiates connections to authorized accepting hosts, as defined by the list received from the controller.
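The controller-mediated workflow can be sketched as follows, assuming an initiating host that has already authenticated to the controller, received its list of authorized accepting hosts with per-host policy, and refuses any connection outside that list. Host names and policy fields are illustrative.

```python
# Hypothetical policy handed to an initiating host by the SDP controller after authentication.
CONTROLLER_POLICY = {
    "accepting_hosts": {
        "payroll.internal": {"require_encryption": True},
        "wiki.internal": {"require_encryption": False},
    }
}

def connect(initiating_host: str, target: str, policy: dict) -> str:
    # Only connect to hosts the controller has authorized, honoring per-host policy.
    allowed = policy["accepting_hosts"]
    if target not in allowed:
        return f"{initiating_host} -> {target}: refused (not authorized by controller)"
    transport = "mTLS tunnel" if allowed[target]["require_encryption"] else "plain session"
    return f"{initiating_host} -> {target}: connected over {transport}"

print(connect("laptop-001", "payroll.internal", CONTROLLER_POLICY))
print(connect("laptop-001", "attacker.example", CONTROLLER_POLICY))
```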
FIGURE 8.3 Software-Defined Perimeter (SDP) workflow. Software Defined Perimeter Working Group.23
Beyond the mechanics of the SDP, there exists a multitude of use cases where an SDP can be used to improve the security of a given environment:
▪ Enterprise application isolation: John Kindervag of Forrester wrote, “There’s an old saying in information security, we want our network to be like an M&M, with a hard crunchy outside and soft chewy center.24” This “old saying” uses the M&M to explain that once an attacker has breached the external perimeter of its intended target, it effectively has free rein over the internal environment, or the soft chewy center! To address this issue, organizations may wish to isolate those assets that are considered high value. By using an SDP to isolate these high-value assets from normal assets, the organization has the opportunity to mitigate the risk of an attacker moving laterally across the entire infrastructure.
▪ SDP within cloud environments: SDP has the opportunity to be deployed across all cloud models and architectures. For example, end customers have the opportunity to use SDP in order to hide and secure all public cloud instances that are used. Alternately for Infrastructure-as-a-service environments, SDP can be used as a “protected on-ramp” for customers.
▪ Cloud-based virtual-desktop infrastructure (VDI): A VDI environment uses virtualization to present a desktop to the end user. The use of such an infrastructure could well benefit from being hosted in a cloud environment, not least because of the payment model (e.g., by the hour), but also the accessibility of the service. However, this approach could result in issues because the VDI will likely require access to internal services (e.g., file access, internal applications, etc.). In these instances, it is expected that an SDP can assist by allowing the end organization to limit access at a granular level.
▪ Internet of Things (IoT): The IoT landscape will be comprised of a multitude of devices, many of which will host particularly sensitive data and may not be able to support the installation of security software. An SDP will allow organizations to hide and secure such devices and services.
Such use cases clearly articulate the importance and role of SDPs in the provision of security in a world where traditional perimeters are no longer suitable. The intention of the Software Defined Perimeter Working Group is to build upon other work within this environment, with the inclusion of all stakeholders. The document entitled SDP Specification v1.0,25 released in April 2014, defines the SDP protocol; this includes the authorization protocol and the authentication used between hosts. Also defined is the logging format, which at a minimum will include the following fields:
▪ Time: When the log was created
▪ Name: Of the event
▪ Severity: Ranging from debug to critical
▪ Device address: IP address of the machine that generates the record
It is recommended that all logs be passed to a SIEM system to provide an overall view of security within the environment; a brief sketch of such a record follows.
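Below is a hedged sketch of such a record, carrying the four minimum fields in a structured form that a SIEM could ingest; the severity scale follows the debug-to-critical range mentioned above, but the exact encoding is an assumption rather than the specification’s wire format.

```python
import json
import socket
from datetime import datetime, timezone

SEVERITIES = ["debug", "info", "warning", "error", "critical"]  # debug through critical

def device_address() -> str:
    # IP address of the machine generating the record (falls back to loopback).
    try:
        return socket.gethostbyname(socket.gethostname())
    except OSError:
        return "127.0.0.1"

def sdp_log_record(name: str, severity: str) -> str:
    # Build a minimal SDP log entry containing the four mandatory fields, as JSON for a SIEM.
    if severity not in SEVERITIES:
        raise ValueError(f"unknown severity: {severity}")
    return json.dumps({
        "time": datetime.now(timezone.utc).isoformat(),  # when the log was created
        "name": name,                                    # name of the event
        "severity": severity,
        "device_address": device_address(),
    })

print(sdp_log_record("connection_refused", "warning"))
```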
As has been articulated throughout this chapter, and indeed the preceding chapters of this book, the CSA is an inclusive organization that allows experts to contribute to the deliverables documented within this text (as well as those that are not included). Therefore, if you, the reader, have a particular interest, or indeed disagree with any of the deliverables detailed, please get involved with the appropriate working group; your input will be greatly welcomed.