Chapter 5

Making the Move into the Cloud

Abstract

Having considered the threats to cloud computing over the previous two chapters, we now turn our focus to the steps that end customers need to take in order to make the move to the cloud.

A brave new world
▪ Cloud computing checklist
▪ Security for the cloud
Traditionally, when organizations look to secure their data, they take a fairly conventional approach: the focus is on identifying where in the organization key systems and data are located. Once this exercise is completed, a risk assessment is conducted to determine what security risks face the organization, and selected security controls are then implemented to manage those risks.
In most cases, those security controls focus on the systems and physical environment. Firewalls are placed on the network perimeters, intrusion detection systems (IDS) placed on the network, antivirus software installed on computers, and access control lists set on servers. With this model, the focus is very much on securing items such as servers, PCs, and other devices rather than on the information residing on those systems. In effect, organizations mimic the security of medieval castles, where all access into and out of the castle is controlled and monitored, with guards patrolling within the castle walls to spot any suspicious behavior.
However, just as castles have proven ineffective against many of today’s threats, in the age of cloud computing so too has the traditional security model. When working in the cloud, a large amount of the control that organizations traditionally had over their infrastructure and data is no longer there. Instead organizations rely on others to secure their data and services.
This requires a radical change in approach and mind-set to security in the cloud. Instead of implementing a security model that focuses on protecting the perimeter and whatever is within that perimeter, the security model instead must shift focus to the data and where that data may be transmitted and where it may be stored, and most importantly who (person or a process) has what level of access to it.
In essence, the mind-set has to change from employing the medieval castle model for security to one more akin to an airport’s air traffic control tower. In an airport, all traffic and activity is managed and controlled from the air traffic control tower. Airplanes cannot land or take off without using the air traffic control tower. Vehicles cannot move near the runways without the air traffic control tower being aware. When an airplane wants to land at an airport, it must contact the air traffic control tower, which then guides that airplane to land on an appropriate runway. From there, the airplane, with guidance from the air traffic control tower, taxis to a berth at the airport’s terminal. At this berth are the support services to enable passengers to disembark from the airplane, the passengers’ luggage to be unloaded, cleaning crews to enter and clean the airplane, and catering staff to replenish the supplies and food within the galleys. Once this is done, a new set of passengers is loaded onto the plane along with their baggage. The plane is guided away from the berth and taxis to the right runway. When the conditions are right, the air traffic control tower allows the airplane to take off. Once the airplane is in the air, it is directed safely through the airspace until it reaches the airspace for which another air traffic control tower is responsible. Control is then handed over to that tower to allow the airplane to continue on its journey.
In cloud computing, data is analogous to airline passengers. In order to get from one destination to another it is important to ensure the right data gets to the right place at the right time and that all supporting services are available when required. By taking the air traffic control tower analogy, organizations can focus more on the items that matter, the data and services, and leave the worry of securing the premises and the infrastructure to third parties.
In order to move to this model of securing data in the cloud, organizations need to take a number of key steps in order to ensure the appropriate data and/or services are moved to the cloud.
It should be noted that this approach is also important for identifying which items can be moved into the cloud and which, for various reasons including security, should remain on premise. There are a number of reasons why this may be the case.
It may be more suitable to keep highly sensitive data on premise rather than store it in the cloud. For example, it would probably be more prudent for a company’s most valuable intellectual property and/or research and development work to remain on premise, whereas the Customer Relationship Management system could be an ideal candidate to move to the cloud. In some specific use cases, the move to the cloud may actually improve security, especially when a company’s security controls are outdated and the investment required to improve them would cost more than using a more secure cloud service.
Another reason not to move certain services into the cloud is that those services may not be sufficiently secure to begin with. This could be due to faulty code, inappropriate access controls, or poor processes. If such services remain on premise, their exposure to attack is more limited; moving an insecure service into the cloud without first taking steps to secure it could make that service more vulnerable than if it remained onsite.
For an organization to successfully migrate its systems to the cloud in a secure manner, it must understand and know exactly what it will be moving into the cloud. This requires the organization to identify and classify its information assets.

Cloud Computing Checklist

The following steps should be considered when making the move to the cloud.

Identifying Information Assets

Normally when organizations think about their assets they focus on the accountants’ definition of assets. This definition tends to focus on items that hold a monetary value to the organization. In the main these tend to be physical assets such as buildings, desks, computers, printers, and, in some cases, software. This approach does not take into account the value of intangible items such as data.
For organizations to know what data is suitable to move securely into the cloud they need to first identify what data they have, where that data is located, and finally how valuable that data is to the organization.
Organizations should identify all of the key data they employ. This data could be held in databases, in spreadsheets, on mainframe systems, or in files on a network share.
This exercise is important for organizations to complete, even more so when moving data to the cloud. It is important that organizations understand what information is held where, not just from a security point of view, but also from a data quality point of view, to prevent duplication and errors. When moving data into the cloud, the data identification exercise takes on even more importance to ensure that the right data is stored in the appropriate places.

Classifying the Data

Having identified what data the organization has, it then should determine how important that data is to the organization. This process is known as classifying the data or data classification. Classifying the data enables the organization to understand how critical or important that data is to the organization. For example, the database that holds all customers’ financial details would be of more value or hold more importance to the organization than the information to be published on the organization’s Website.
Anyone familiar with spy novels or movies will be familiar with data classification and its advantages. Information marked Top Secret or For Your Eyes Only obviously holds more importance than information which does not. It is quite easy to determine from those labels how important the information is and how it must be secured and treated. By classifying data, organizations can ensure they are aware of what data can be moved into the cloud and which data may be more suitable to remain on premise due to its sensitivity.
There are many ways to classify data but in the main they fall into two categories: quantitative or qualitative methods. Which method an organization decides to employ will depend on many factors such as the type of industry the organization is in, how regulated the organization’s industry sector is, how mature the organization is, and the time and budget the organization has to perform a classification exercise.
A quantitative method attempts to put a monetary value, or other numerical representation of value, on the data the organization holds. While this exercise is relatively easy to conduct for physical assets such as computers or other items that have an actual value, it is not so easy to do for data assets. For example, how much value does an organization place on its customer database? How does it calculate that value? Does it determine the value based on the man-hours taken to generate the data, or the revenue that data brings into the organization, or is it a combination of these and other factors? While it may be difficult to do a thorough quantitative analysis of the data assets, the advantage it brings is that the organization gains a clear understanding of the actual impact that data has on its bottom line. This makes it much clearer and easier to determine how much budget should be spent on securing that data.
A qualitative approach to data classification is much more intuitive and may not involve, or rely as heavily upon, numerical values applied to the data. In a qualitative approach, the data classification exercise determines the importance of data assets to the organization from a business perspective. The data is classified based on its criticality to the organization, such as high, medium, or low criticality.
Whichever methodology is employed by the organization it is important that all involved in the data classification exercise understand the methodology used and that the results from each exercise can be consistently repeated. This is important in order to enable the organization to properly manage its data in and out of the cloud throughout the data’s lifecycle.
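As a minimal sketch of how a repeatable qualitative classification rule might be expressed, so that different people running the exercise arrive at the same result (the criteria below are illustrative assumptions, not a prescribed scheme):

def classify(contains_personal_data: bool, regulatory_scope: bool,
             business_impact_if_disclosed: str) -> str:
    """Return High/Medium/Low using the same rule every time the exercise is run.
    business_impact_if_disclosed is one of 'severe', 'moderate', 'minor'."""
    if contains_personal_data or regulatory_scope or business_impact_if_disclosed == "severe":
        return "High"
    if business_impact_if_disclosed == "moderate":
        return "Medium"
    return "Low"

print(classify(True, False, "moderate"))   # High - e.g., a customer database
print(classify(False, False, "minor"))     # Low  - e.g., content for the public Website

Capturing the criteria in this explicit form is one way of making the exercise consistently repeatable, whichever scheme the organization actually adopts.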
FIGURE 5.1 Information lifecycle.
Information has an inherent lifecycle: it is first created, then processed and stored, until ultimately it is no longer needed and is destroyed (Figure 5.1).
It is important to also note that as data moves through its lifecycle its classification can change. Some data that today is very sensitive may tomorrow be public information. An example of this could be data relating to an organization’s stock valuation. Prior to releasing its annual report and other financial details such information would be highly sensitive and must be kept confidential as that information could be used to unfairly manipulate the organization’s stock price. Once the annual report and financial information has been released then it is in the public domain and no longer needs to be treated as confidential.
Having identified and classified the data, the organization can now determine what security controls it needs to implement to protect that data asset. For example, if the value of a data asset is US$25,000 then it makes good business sense to spend US$1000 to protect it. However, it does not make such good business sense to spend US$100,000 to protect the same data asset. This is of course a simplistic example, as the model will also need to factor in elements such as likelihood before any decision is made.
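As a minimal illustration of this cost-benefit reasoning once likelihood is factored in, the sketch below uses entirely hypothetical figures to weigh the expected annual loss for a data asset against the annual cost of a proposed control.

# Hypothetical figures for illustration only: a data asset worth US$25,000,
# with an estimated 10% chance per year of a loss event affecting it.
asset_value = 25_000          # estimated value of the data asset (US$)
annual_likelihood = 0.10      # estimated probability of a loss event per year

expected_annual_loss = asset_value * annual_likelihood   # US$2,500

def control_is_worthwhile(control_cost_per_year: float) -> bool:
    """A control makes simple business sense if it costs less per year
    than the loss it is expected to prevent."""
    return control_cost_per_year < expected_annual_loss

print(control_is_worthwhile(1_000))    # True  - a US$1,000 control is justified
print(control_is_worthwhile(100_000))  # False - a US$100,000 control is not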
How to determine what security controls to put in place and how effective those controls will be on protecting the data asset depends on how effective the risk assessment and management process within an organization is. As data moves in and out of the cloud, it is essential that an organization regularly runs risk assessments to ensure that all risks facing that data asset are properly identified and managed.

Risk Management

Risk Management is the process by which an organization identifies the key security risks that its data assets are facing and the security controls that need to be put in place to reduce and manage those risks. Before engaging with a cloud service provider, organizations should conduct a risk assessment to ensure that the controls that will be in place by both the cloud service provider and the organization itself are effective in managing those risks. If that risk assessment determines the controls are effective in managing the level of risk, then the organization should proceed with the engagement. If that risk assessment, however, determines that the controls in place are not effective enough, then the organization needs to make the decision to put in place more controls to reduce the level of risk, to accept the risks and still engage with the cloud service provider, to engage with another cloud service provider, or to avoid engaging with a cloud service provider and run the service in-house.
Effective risk management requires that the risk assessment exercise is not run as an isolated event at the start of the engagement with the cloud service provider. Instead, the risk management process is a continuous one whereby regular risk assessments should be conducted to ensure the correct levels of controls are in place to protect the data according to its classification. An effective risk management process also helps organizations manage risks over time as the threat, technical, and business landscapes change over time. It is also important to ensure that the risk assessment process provides consistent results each time it is conducted, even when conducted by different people in the organization.
Before conducting a risk assessment it is important that an organization understands what is meant by a risk. At its simplest, a risk is a definable event that has a probability of occurring and an impact should it occur; the risk materializes when the identified problem or event actually happens. A simple analogy to explain this would be as follows: a weak lock on the front door of a house places the house at risk of being burgled. However, the weak lock by itself does not guarantee the house will be burgled. Other factors come into play, such as the location of the house. If it is located in the middle of a forest many miles away from anyone, then the likelihood of a burglar coming across the house is much less than if the house were located in the middle of a city with a high crime rate.
It should be noted that in life risk can have a positive as well as a negative outcome. Whenever a business invests money in a new marketing campaign or in a new product, there is the risk that the investment may not provide any returns. However, that investment could pay significant dividends depending on how successful the marketing campaign or new product is. An example of this would be the Apple iPhone. There was a risk that Apple could have lost all the money it invested in the research and development spent on producing the iPhone. As it turned out, that investment proved to be very positive. In information security, however, risk is usually viewed in terms of its negative impact.
There are many different definitions of risk and in the context of information security there are a number of appropriate definitions. One definition of risk states that

Risk is the likelihood of the occurrence of a vulnerability multiplied by the value of the information asset minus the percentage of risk mitigated by current controls plus the uncertainty of current knowledge of the vulnerability.1

According to ISO 31000:2009—Principles and Guidelines2 on Implementation, risk can be defined as the “effect of uncertainty on objectives,” which means that risks are events that can have a negative or positive effect on the organization’s objectives.
The NIST SP 800-30 Risk Management Guide for Information Technology Systems states that, in its simplest interpretation, risk can essentially be expressed as follows:

Risk = Threat × Likelihood × Impact


Effective risk management is where controls are introduced to either reduce the likelihood of the threat being realized or reduce the impact of the risk. Organizations need to acknowledge that risk cannot be fully eliminated. The purpose of risk management and risk assessments is to identify the risks and appropriate controls that will help reduce the likelihood or the impact of the risk.
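A minimal sketch of how this relationship, and the effect of controls on it, might be expressed, assuming simple 1-5 ordinal scales (the scales and example values are assumptions made for illustration, not part of NIST SP 800-30):

def risk_score(threat: int, likelihood: int, impact: int) -> int:
    """Risk = Threat x Likelihood x Impact, each rated on a 1-5 ordinal scale."""
    return threat * likelihood * impact

# Before controls: a credible threat, quite likely to be realized, with a high impact.
before = risk_score(threat=4, likelihood=4, impact=5)   # 80

# Controls cannot remove the threat itself, but they can reduce the likelihood
# of it being realized (e.g., stronger authentication) or its impact
# (e.g., encrypting the data so a breach discloses less).
after = risk_score(threat=4, likelihood=2, impact=3)    # 24

print(before, after)  # the residual risk is reduced, never eliminated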
Once a risk assessment for the move to a Cloud Service Provider has been completed, the organization needs to look at how it will manage those risks on a continual basis. This is the discipline of risk management, where a formal process is implemented in the organization to ensure that all risks are identified and managed. A risk register should be maintained in which all identified risks are logged and recorded. A risk treatment plan should then be developed outlining the controls that may already be in place, the tasks required to implement any additional controls, and who will be responsible for ensuring each of those tasks is completed.
Having identified the risks, documented them in the risk register, and developed a risk treatment plan, the organization can then determine how best to manage the risks. A number of options exist when managing risks, as outlined below; a short sketch of how a register entry and its treatment decision might be recorded follows the list.
Risk mitigation: Having identified the risks, risk mitigation is where additional controls are implemented to reduce the likelihood or impact of the risk. These controls may be technical controls, or may involve developing new processes and procedures or providing additional training to staff.
Risk acceptance: From time to time there may be risks whose likelihood or impact the organization has no practical means of reducing. There may already be certain controls in place and no additional controls available, or there may be no cost benefit to implementing any additional controls. At this stage, the organization may decide the business benefits outweigh the effort to further mitigate the risk and decide to accept the risk and any consequences should it materialize.
Risk avoidance: Despite all efforts by an organization, it may discover that it cannot reduce the level of risk to an acceptable level. In this case, the organization may have no choice but to avoid the risk. In the cloud computing context, this would be where an organization decides through its risk assessment process that the level of risk in moving a particular service to the cloud is too high and therefore stops the project or implements the solution in-house.
Risk transfer: In essence risk transfer is where an organization decides to outsource the management of certain risks to a third party. In some cases this could mean seeking cyber insurance3 against certain risks materializing. Alternatively, it could mean engaging with a third party to outsource the task or function so that the third party is responsible for managing the risks. This is one of the big advantages Cloud Service Providers have over many organizations. In many cases Cloud Service Providers will have better physical and IT security in their datacenters than their customers, thereby helping customers manage the risks by transferring them to the Cloud Service Provider. It should be noted, though, that while organizations can transfer the management of the risk to a third party, the responsibility for that risk still lies with the organization.
Risk deferral: Similar to risk avoidance, deferring a risk is where an organization decides not to engage in a certain activity, e.g., migrating to the cloud, due to concerns over the effectiveness of security controls. However, instead of canceling the activity entirely, the activity is postponed until more effective controls are in place or an alternative solution can be found.
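As a minimal sketch of how a risk register entry and its chosen treatment option might be recorded, the example below uses fields, scales, and values that are illustrative assumptions rather than a prescribed format.

from dataclasses import dataclass, field
from enum import Enum

class Treatment(Enum):
    MITIGATE = "mitigate"
    ACCEPT = "accept"
    AVOID = "avoid"
    TRANSFER = "transfer"
    DEFER = "defer"

@dataclass
class RiskRegisterEntry:
    risk_id: str
    description: str
    likelihood: int            # 1 (rare) to 5 (almost certain)
    impact: int                # 1 (negligible) to 5 (severe)
    existing_controls: list = field(default_factory=list)
    treatment: Treatment = Treatment.MITIGATE
    planned_controls: list = field(default_factory=list)
    owner: str = ""            # who is responsible for completing the treatment tasks

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

# Illustrative entry for a risk around data held by a provider.
entry = RiskRegisterEntry(
    risk_id="R-012",
    description="Unauthorized access to customer data held by the provider",
    likelihood=3,
    impact=4,
    existing_controls=["provider access controls"],
    treatment=Treatment.MITIGATE,
    planned_controls=["two-factor authentication", "encryption at rest"],
    owner="Information Security Manager",
)
print(entry.risk_id, entry.score, entry.treatment.value)   # R-012 12 mitigate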
In order to determine how best to manage the identified risks, organizations first need to analyze them. The first step in the risk management process is therefore to conduct a comprehensive risk analysis.

Risk Analysis

An organization can employ a number of risk management and analysis methodologies to identify and manage risks. It is important to note that there are no right or wrong methodologies an organization can use, provided the methodology employed is one that can readily and easily identify the risks facing that organization based on its business and cultural needs. The European Network and Information Security Agency’s (ENISA) whitepaper on Cloud Computing Security Risk Assessment4 provides a number of examples of different types of organizations looking to engage in cloud computing and how risk assessments were conducted for each of them.
The main risk methodologies used in information security and which are applicable to migrating to the cloud are
▪ Operationally Critical Threat, Asset, and Vulnerability Evaluation (OCTAVE by Carnegie Mellon University, Software Engineering Institute (SEI)).5
▪ CCTA Risk Analysis and Management Method (CRAMM) from Insight Consulting.6
▪ National Institute for Standards and Technology (NIST) Risk Management Framework from the NIST.7
▪ ISO/IEC 27005:2011 Information technology —Security techniques—Information security risk management.8
A number of other risk assessment methodologies may be more suitable for certain organizations. The ENISA maintains a very useful list of the various risk methodologies that are available and compares them on its Inventory of Risk Management/Risk Assessment Methods and Tools9 webpage.
Similar to data classification, there are two main approaches to conducting a risk analysis: quantitative risk analysis and qualitative risk analysis. As with data classification, which method to use is entirely down to the needs of the individual organization.
Quantitative Risk Analysis: The quantitative risk analysis approach attempts to assign real numbers to the costs of safeguards and the amount of damage that can take place should the risk materialize. This approach assigns value to information assets, be they tangible or intangible, and then estimates the potential loss per risk. Using this approach can provide real figures that can be easily translated into business terms for senior management. However, for this approach to be successful a lot of data needs to be gathered and analyzed. It also results in hard figures being assigned to intangible assets (such as data), which may lead to false assumptions.
Qualitative Risk Analysis: The qualitative risk analysis approach judges an organization’s exposure to threats based on judgment, intuition, and experience, rather than assigning real numbers to possible risks and their potential losses. This approach is generally easier to conduct and understand, and it does not require huge amounts of information. However, because it does not employ the rigor and discipline of assigning actual figures to assets and their potential losses, it can be hard to produce consistent and accurate results.
When selecting an approach it is important to ensure that it suits the organization’s business needs and that it can be conducted regularly and consistently so that the risk management process remains effective.
Once the risks have been identified then the appropriate controls to manage the risks can be implemented. As part of the overall risk management process regular risk assessments should be completed, ideally at least on an annual basis. This process should ensure that existing controls are working as expected and that any new risks or changes to existing risks are identified and catered for (Figure 5.2).
Should there be any major changes in the technical infrastructure or business needs of the organization, a risk assessment should be completed. Similarly, should any major changes impact the Cloud Service Provider, a risk assessment should be completed to ensure all risks are maintained in accordance with the requirements of the organization.
The risk assessment process is the most critical step when migrating to the cloud. Many organizations view the cloud as a panacea for some of their internal security or indeed IT provisioning or operational issues. Many organizations assume that by migrating services and data to a Cloud Service Provider that is totally focused on securing and providing that service, their data and/or services will logically also be more secure. This is not necessarily so. While it is true that, because of their business model, many Cloud Service Providers will invest much more time, resources, and money into securing their offerings, many security issues may not lie within the purview of the Cloud Service Provider.
FIGURE 5.2 Risk management cycle.
If the organization has poor security processes and procedures, these will not be magically resolved by migrating to the cloud. If the application being migrated to the cloud has many security bugs and vulnerabilities, then unless the organization tackles those bugs and vulnerabilities as part of its migration to the cloud, the application will just be as insecure in the cloud as it was when onsite.
Organizations that establish and maintain a comprehensive risk management process with regard to the cloud will find they will gain many benefits and advantages from using the cloud.

Security for the Cloud

Having identified the assets the organization wants to move to the cloud and having completed an appropriate risk assessment, organizations can now look to confidently migrate to a Cloud Service Provider. The type of platform the Cloud Service Provider offers will determine the security controls that can be implemented and also the amount of control the organization will have over those security controls. The main platforms to consider are
▪ Infrastructure as a Service—IaaS
▪ Platform as a Service—PaaS
▪ Software as a Service—SaaS.
The key differences between these platforms are outlined in the “Security Guidance for Critical Areas of Focus in Cloud Computing v3.0”10 from the Cloud Security Alliance:

In SaaS environments the security controls and their scope are negotiated into the contracts for service; service levels, privacy, and compliance are all issues to be dealt with legally in contracts. In an IaaS offering, while the responsibility for securing the underlying infrastructure and abstraction layers belongs to the provider, the remainder of the stack is the consumer’s responsibility. PaaS offers a balance somewhere in between; where securing the platform falls onto the provider, but both securing the applications developed against the platform and developing them securely, belong to the consumer.

Organizations need to be aware of the differences between each of the platforms in order to ensure that the most appropriate and effective security controls are implemented.
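As a rough, simplified illustration of how responsibility shifts across the three platforms, the sketch below encodes the division described in the CSA guidance quoted above; the layer names and the exact split are assumptions made for the example and will vary by provider and contract.

# Simplified shared-responsibility view, based on the CSA guidance quoted above.
# The "provider"/"customer" assignments per layer are illustrative, not contractual.
responsibility = {
    "IaaS": {
        "physical facilities": "provider",
        "infrastructure and abstraction layers": "provider",
        "operating system and middleware": "customer",
        "applications": "customer",
        "data": "customer",
    },
    "PaaS": {
        "physical facilities": "provider",
        "infrastructure and abstraction layers": "provider",
        "operating system and middleware": "provider",
        "applications": "customer",   # built on, and secured against, the platform
        "data": "customer",
    },
    "SaaS": {
        "physical facilities": "provider",
        "infrastructure and abstraction layers": "provider",
        "operating system and middleware": "provider",
        "applications": "provider",   # scope negotiated in the contract and SLA
        "data": "customer",           # the customer remains accountable for its data
    },
}

def customer_responsibilities(model: str) -> list:
    return [layer for layer, owner in responsibility[model].items() if owner == "customer"]

print(customer_responsibilities("PaaS"))  # ['applications', 'data']

Whatever the platform, the data, and the compliance obligations attached to it, remain the customer organization’s accountability.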
In addition to understanding the different types of platforms, which determine the type of security controls to deploy in the cloud, organizations need to appreciate the requirements of each of the cloud deployment models. In the main there are four different types of cloud deployment models. They are best described from a cloud security point of view in the “Security Guidance for Critical Areas of Focus in Cloud Computing v3.0” by the Cloud Security Alliance.
▪ Public Cloud: The cloud infrastructure is made available to the general public or a large industry group and is owned by an organization selling cloud services.
▪ Private Cloud: The cloud infrastructure is operated solely for a single organization. It may be managed by the organization or by a third party and may be located on premise or off-premise.
▪ Community Cloud: The cloud infrastructure is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, or compliance considerations). It may be managed by the organizations or by a third party and may be located on premise or off-premise.
▪ Hybrid Cloud: The cloud infrastructure is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load balancing between clouds).

Security Controls for the Cloud

When looking to implement security controls in the cloud, organizations need to consider a number of different types of controls. There are also a wide range of guidance documents that provide details on how to implement security controls in the cloud. The main reference documents in this area are
▪ The “Security Guidance for Critical Areas of Focus in Cloud Computing v3.0”12 from the Cloud Security Alliance;
▪ The Federal Risk and Authorization Management Program (FedRAMP)13 guidance published by the US government;
▪ The “Guidelines on Security and Privacy in Public Cloud Computing” published by the US National Institute of Standards and Technology (NIST)14; and
▪ “Procure Secure—A Guide to Monitoring of Security Service Levels in Cloud Contracts” published by the ENISA.15
Each of the above publications has a number of sections with corresponding recommendations to implement in order to secure cloud services. Please refer to Chapter 6 for further details on the certification frameworks.

Security Guidance for Critical Areas of Focus in Cloud Computing v3.016

The “Security Guidance for Critical Areas of Focus in Cloud Computing v3.0”17 from the Cloud Security Alliance has 14 security domains in which it discusses various security considerations regarding cloud security. (Please refer to Chapter 8 for more detail.)

Federal Risk and Authorization Management Program18

The FedRAMP19 guidance, published by the US government for use by government agencies but equally applicable to private companies, also outlines a number of areas that should be considered when securing the cloud. Please refer to Chapter 6 for further detail.

Guidelines on Security and Privacy in Public Cloud Computing20

NIST’s “Guidelines on Security and Privacy in Public Cloud Computing”21 focuses on public cloud services and how best to secure them. Within its guidelines NIST looks at the following areas as being key to securing cloud services:
▪ Governance;
▪ Compliance;
▪ Trust;
▪ Architecture;
▪ Identity and access management;
▪ Software isolation;
▪ Data protection;
▪ Availability; and
▪ Incident response.

Procure Secure22

The main focus of ENISA’s “Procure Secure—A Guide to Monitoring of Security Service Levels in Cloud Contracts”23 is on how organizations should use service-level agreements (SLAs) to ensure Cloud Service Providers deliver the level of security required by the organization. It should be noted that while negotiating SLAs is best practice, this may not be possible in all cases. In some situations the Cloud Service Provider may only provide a certain service and for its own efficiency of operations or cost will not alter its SLA for individual customers. In other cases the organization may be too small, or not have the legal expertise available, to negotiate an SLA, particularly with larger Cloud Service Providers. However, as organizations will not have direct control over many of the security controls required to maintain security of their data and/or services, an effective SLA can be a major tool in securing the cloud. The areas covered by this guide are
▪ Service availability;
▪ Incident response;
▪ Service elasticity and load tolerance;
▪ Data lifecycle management;
▪ Technical compliance and vulnerability management;
▪ Change management;
▪ Data isolation; and
▪ Log management and forensics.
Each of the above publications provides valuable recommendations on how an organization can secure the cloud. While each publication has different categories and areas of focus, there are a number of common controls throughout. These can be categorized into the following control areas:
▪ Governance and compliance controls;
▪ Policies and procedures controls;
▪ Physical security;
▪ Technical controls; and
▪ Personnel controls.
It should be noted that some controls may be applicable to more than one category, but for ease of reading each control is discussed under the most appropriate category below.

Governance and Compliance Controls

▪ Cloud Governance Frameworks
Organizations should ensure when engaging with a Cloud Service Provider that the Cloud Service Provider employs a cloud governance framework. By employing a cloud governance framework the Cloud Service Provider will demonstrate it takes its commitments to security seriously and has adopted industry-recognized best practices. An example would be the “Best Practices for Governing and Operating Data and Information in the Cloud” which is part of the Cloud Security Alliance’s Cloud Data Governance Project.24
▪ Compliance
One of the challenges organizations face with cloud computing is ensuring they remain compliant with various legal, industry, customer, and regulatory requirements. Organizations based in the European Union that process personal data of individuals have to comply with the European Union’s Data Protection Directive25 while organizations in the United States that process personal medical records have to comply with the Health Insurance Portability and Accountability Act.26 Other compliance requirements, including the Payment Card Industry Data Security Standard, dictate certain security requirements that organizations must comply with should they process any credit card information. Organizations that have compliance requirements will need to ensure that the Cloud Service Provider has the appropriate controls in place to enable the organization to remain in compliance.
An example would be organizations that are obliged to comply with the European Union’s Data Protection Directive. Under that directive it is illegal to export personal data outside the European Economic Area27 unless it is to approved countries with similar privacy laws to the EU, to US companies that have signed up to the US Safe Harbor28 Framework, or to companies that are contractually obliged29 to protect the data in accordance with the EU Data Protection Directive’s requirements. Given the nature of the cloud it can be difficult to determine exactly where data resides. It could be on a number of servers, spread over a number of datacenters, located in various places around the world. The Cloud Service Provider will need to demonstrate to these organizations that their data will be stored, processed, and deleted in accordance with their Data Protection obligations.
A Cloud Service Provider that has a full-time compliance office, or officer, demonstrates that it takes this issue seriously. Organizations engaging with a Cloud Service Provider should request details of the provider’s compliance function, such as who within the provider is responsible for compliance, what regulations and requirements the provider complies with, and whether or not the Cloud Service Provider has a compliance policy in place.
It should be noted that an organization is still responsible for all of its compliance requirements even when the data and/or services are provided by a Cloud Service Provider.
▪ Third-Party Assurances
Many suppliers, be they Cloud Service Providers or traditional IT suppliers, will assert that they provide good service and they take security seriously. While many are sincere in these proclamations, it is akin to buying a second-hand car and taking the word of the sales person that everything is okay with the car. When buying a second-hand car it is recommended to take it for a test drive and to have a trained mechanic examine it for any potential problems. Similarly when engaging with a Cloud Service Provider, an organization should consider whether or not it should take the provider at face value with regard to their assurances regarding security. Ideally, the organization should seek some independent third-party assurances as to how effective the security controls are within the Cloud Service Provider.
The ISO 27001 Information Security standard is a well-recognized international standard which is independent, vendor neutral, and covers many aspects of security. Organizations that are certified to the standard demonstrate that they have implemented the security controls within the standard that are applicable to them and that these controls have been independently verified by a trusted third party. Further details are included in Chapter 6.
Another initiative that can be used is the Cloud Security Alliance’s Security, Trust & Assurance Registry (STAR)30 initiative. The Cloud Security Alliance’s STAR was launched in 2011 with the aim of improving transparency and assurance of Cloud Service Providers. Further details are included in Chapters 2 and 6.
▪ Data Ownership
It may seem strange to have to bring this item to the fore but it is important to ensure that when an organization moves its data into the cloud that it is clearly understood who owns the data that is migrated into the system and, just as importantly, data that is created within the cloud.
There should be no ambiguity over the ownership of the data. In order to ensure the organization meets its compliance requirements, it is essential ownership of the data is clearly understood. Should the Cloud Service Provider claim that any data held within their platform belongs to them and they can do with it what they wish, this could place the customer organization in breach of its compliance requirements.
The issue of data ownership also needs to be defined for the event that the customer organization decides to end its engagement with the Cloud Service Provider and move the service back in-house or to another provider. The customer organization will want to ensure that, should it take this route, the initial Cloud Service Provider does not claim ownership of the customer organization’s data.
So before engaging with a Cloud Service Provider an organization must clearly agree with the provider who actually owns the data.
▪ Legal Interception, Court Orders, or Government Surveillance
Since the recent revelations by Edward Snowden relating to government surveillance of Internet companies and Cloud Service Providers, the issue of government access to private data has come to the fore for many organizations. In particular, organizations located in one jurisdiction may have concerns about whether the government of another jurisdiction can access the organization’s data because it engaged with a Cloud Service Provider located in that foreign jurisdiction. This issue was recently demonstrated when Microsoft was ordered by a US court to surrender email data belonging to one of its customers, even though the data was stored on a server physically located in Dublin, Ireland.31 This has raised concerns for some organizations as to whether or not they should store sensitive data with a Cloud Service Provider, particularly one that is subject to court orders from a different jurisdiction.
Organizations that are considering storing highly confidential information, be they private companies with commercial or intellectual data or government bodies with sensitive information, should seek assurances from the Cloud Service Provider as to what their policy is regarding requests from government bodies or law enforcement agencies. Questions to ask include
▪ Will the Cloud Service Provider respond to all requests without question?
▪ Will the Cloud Service Provider respond only to legal court requests?
▪ Will the Cloud Service Provider notify the organization of any requests it received relating to the organization’s data?
▪ Does the Cloud Service Provider provide access to customer data for intelligence agencies? If so, under what conditions?
▪ Under which jurisdiction and courts is the Cloud Service Provider bound?
▪ Does the Cloud Service Provider publish a transparency report outlining how many requests for data it has received from governments and law enforcement agencies?
▪ Supply Chain Security
Many Cloud Service Providers rely on third parties to help them provide their services. These services could range from customer call center services, to technical support, to cleaning companies, to utility suppliers such as water and power, and contractor staff. Organizations engaging with a Cloud Service Provider should determine what other third parties the Cloud Service Provider employs and what security controls, protocols, and assurances are in place with those providers.
Organizations should note that if they have any compliance requirements, in most cases those requirements not only extend to the Cloud Service Provider(s) they engage with, but also to any third parties the Cloud Service Provider uses to provide its service to the customer. Under many compliance regimes, the customer organization will retain responsibility for ensuring that the entire supply chain is compliant with the relevant regulations. Organizations therefore should ensure that the Cloud Service Provider provides full transparency with regard to its own suppliers and the security controls those suppliers have in place.
▪ Security Testing and Auditing
While assurances from a Cloud Service Provider or from independent third parties can provide an organization with a certain level of confidence in the security of a Cloud Service Provider, there may be times when the organization would like to verify the claims being made for itself. Traditionally this would involve allowing the organization to conduct an audit of the supplier’s systems, premises, and/or services. The organization would arrange for its own internal audit team, or engage a trusted external provider, to conduct an audit of the supplier.
In the traditional procurement and engagement model this approach worked in most cases; however, when it comes to the cloud the model breaks down. Given that an organization’s data may be located anywhere in the cloud at any time, it will be extremely difficult for an auditor to conduct an audit relating to the physical location of the data. Many cloud providers have developed their own proprietary platforms and systems, which many auditors will not be familiar with. Finally, many Cloud Service Providers simply do not have the manpower to facilitate every request from a potential or existing customer to audit their facilities and systems.
Penetration and vulnerability testing also presents difficulties. A Cloud Service Provider may not wish to allow customers to perform penetration tests against its systems in case those tests cause availability or other issues with the provider’s services and impact other customers. There may also be legal and liability issues for the customer organization should a penetration or vulnerability test cause problems. This extends even to cases where customers do not wish to test the Cloud Service Provider’s own services but simply want to test their own applications; performing such application penetration or vulnerability tests may still breach the Cloud Service Provider’s terms and conditions.
If an organization cannot get agreement from the Cloud Service Provider to perform its own security tests or audits, it should seek agreement that the provider will give the organization access to the results of any penetration tests or audits the Cloud Service Provider itself commissions. While not as independent as engaging its own preferred testers and auditors, this option can help the organization determine how secure the Cloud Service Provider is.
▪ Service-Level Agreements
In the world of cloud computing the selection, implementation, support, and ongoing management of security controls are under the control of the Cloud Service Provider and not the customer organization. The only influence and oversight the organization will have is via the Cloud Service Provider’s SLA. It is therefore vitally important that organizations spend time and energy in ensuring the SLA is suitable to their requirements and provides them with the tools and ability to manage the security of the data and services entrusted to the Cloud Service Provider.
The ENISA provides a very comprehensive guide on how to establish and manage an SLA with a Cloud Service Provider. This is detailed in the “Procure Secure: A guide to monitoring of security service levels in cloud contracts”32 and should be referred to by any organization looking to engage with a Cloud Service Provider.
An effective SLA will provide an organization with continuous feedback on the effectiveness of the security controls being provided by the Cloud Service Provider. An effective SLA should also enable the organization to seek recompense or service credits in the event the Cloud Service Provider does not meet the goals and targets agreed in the SLA. An effective SLA can be a powerful tool in ensuring a provider continues to meet the levels of service required by the customer organization.
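As a minimal sketch of the kind of measurement an effective SLA enables, the example below calculates monthly availability and applies tiered service credits; the 99.9% target and the credit tiers are purely hypothetical figures, not values from any provider’s SLA.

# Hypothetical SLA terms: 99.9% monthly availability target, with tiered
# service credits when the provider misses the target.
CREDIT_TIERS = [          # (minimum availability %, credit as % of monthly fee)
    (99.9, 0),
    (99.0, 10),
    (95.0, 25),
    (0.0, 50),
]

def monthly_availability(downtime_minutes: float, days_in_month: int = 30) -> float:
    total_minutes = days_in_month * 24 * 60
    return 100.0 * (total_minutes - downtime_minutes) / total_minutes

def service_credit(availability_pct: float) -> int:
    for threshold, credit in CREDIT_TIERS:
        if availability_pct >= threshold:
            return credit
    return CREDIT_TIERS[-1][1]

availability = monthly_availability(downtime_minutes=120)   # two hours of downtime
print(round(availability, 3), service_credit(availability)) # 99.722 -> 10% credit

Being able to measure performance against the SLA in this way is what gives the customer organization ongoing oversight of controls it does not operate itself.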

Policies and Procedures Controls

Processes and procedures ensure a structured approach is taken when dealing with certain tasks or practices. This is even more important when engaging with a Cloud Service Provider to ensure the security of an organization’s data is not undermined or compromised by provider staff, or indeed the organization’s staff, not following correct procedures. There are a number of key policies and procedures an organization should ensure the Cloud Service Provider has in place when engaging with that supplier.
▪ Privacy Policies
Privacy policies are important as they demonstrate to others what the company’s approach is to privacy and how the company will protect the privacy of individuals. Some countries have very strict privacy regulations, such as those within the European Union, Switzerland, and Iceland, which require companies operating from them or selling to customers in them to take strict measures to ensure privacy of personal information.
When engaging with a Cloud Service Provider an organization should ensure that it first has its own privacy policy in place and then ensures that the Cloud Service Provider’s privacy policy is in line with that of the customer organization.
In addition to the above the customer organization should determine what approach the Cloud Service Provider takes to building privacy controls into its services, otherwise known as Privacy by Design.33
The Cloud Service Provider should also have a policy of conducting Privacy Impact Assessments when it develops new services and alters or decommissions existing ones. The United Kingdom’s Information Commissioner’s Office provides a “Conducting Privacy Impact Assessments Code of Practice”34 which is an excellent guide on this topic.

Change Management

All IT environments grow and change over time. New network components will be added, existing components will be upgraded or replaced, and software levels on components, services, and applications will be revised and updated. Likewise, a Cloud Service Provider’s environment will grow and change. It is essential that assurances are obtained from the Cloud Service Provider that any change to the provider’s production environments is managed in a structured and controlled way to ensure minimal disruption to service.
When engaging with a Cloud Service Provider an organization should ensure it has visibility of the provider’s Change Management Policy and get assurances that this policy can
▪ reduce the risk associated with unplanned changes;
▪ inform affected parties, such as the customer organization, of a planned change so that they may take appropriate action;
▪ minimize the effect a planned change may have on the quality or availability of services and/or data;
▪ minimize the overall cost and time associated with planned changes;
▪ provide an auditable trail for compliance, troubleshooting, and review purposes;
▪ facilitate continuous learning and improvement of personnel, processes, and procedures; and
▪ provide metrics for management decisions.
In addition the customer organization should ensure its own Change Management Policies are robust and are adapted to engage with the Cloud Service Provider. In particular the policy should ensure that
▪ changes made on the customer organization’s infrastructure are assessed to ensure they do not impact on how the organization accesses the services provided by the Cloud Service Provider and
▪ any changes made on either the customer or the Cloud Service Provider’s systems are assessed to ensure any coordinated changes required on both sides are completed in a timely and appropriate manner.

Patch Management

Patch management is the discipline of ensuring fixes to software bugs, otherwise known as patches, are applied in a timely manner while maintaining the service being provided. Applying patches in a timely and process-driven manner is important as
▪ critical bugs could cause a failure in the underlying infrastructure resulting in a prolonged outage for the cloud service or any dependent services within the customer organization’s environment;
▪ without a formalized patch management policy it is possible that applying a patch to one element of the Cloud Service Provider’s platform could have negative consequences for a system or other element that depends on the patched element; and
▪ critical bugs in the underlying database, services, or platform could be exploited by individuals to gain unauthorized access to sensitive data.
When engaging with a Cloud Service Provider the customer organization should make sure it is aware of what the provider’s patch management policy is. In some cloud platforms, e.g., SaaS, applying a patch may have little impact on the service being provided. However, should the customer organization be integrating its own systems with the SaaS platform, the application of a patch could disrupt that interoperability. Similarly, changes to a PaaS or an IaaS platform could negatively impact the services subscribed to by the customer organization.
When examining the Cloud Service Provider’s patch management policies, the customer organization needs to ensure that all patches are managed in a structured manner. It is also important that the provider’s patch management policy is integrated with its Change Management Policy.
The key elements an organization should look for in the Cloud Service Provider’s Patch Management Policy include
▪ how often patches are applied;
▪ how the provider will manage emergency or critical patches;
▪ the level of testing that is required before patches are applied;
▪ who within the provider authorizes the application of patches, and whether the customer organization will have any input into that decision;
▪ how the Cloud Service Provider ensures patches are centrally controlled, distributed, and applied; and
▪ clarification of the roles and responsibilities for applying key patches and updates to the various systems and platforms within the service provider, and where the demarcation lies for patches within the customer’s systems.
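As a small sketch of how a customer organization might monitor whether agreed patch windows are being met, the example below assumes hypothetical severity categories and windows rather than any provider’s actual policy.

from datetime import date

# Hypothetical patch windows agreed with the provider, in days by severity.
PATCH_WINDOW_DAYS = {"critical": 3, "high": 14, "medium": 30, "low": 90}

def patch_within_policy(severity: str, released: date, applied: date) -> bool:
    """True if the patch was applied within the agreed window for its severity."""
    return (applied - released).days <= PATCH_WINDOW_DAYS[severity]

print(patch_within_policy("critical", date(2014, 4, 8), date(2014, 4, 10)))  # True: 2 days
print(patch_within_policy("high", date(2014, 4, 8), date(2014, 5, 1)))       # False: 23 days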

Incident Response Plan

Computer security incidents are a matter of course for every organization, even more so for Cloud Service Providers given the large number of clients they have which in turn could make them a bigger and juicier target for criminals. While the Cloud Service Provider will provide many assurances that they have excellent security controls in place, it is important to recognize that there is no such thing as 100% security and that at some stage there may be a security incident.
As the party responsible for all its data, the customer organization should satisfy itself that the Cloud Service Provider has appropriate incident response plans and processes in place. It should also ensure that roles and responsibilities regarding dealing with security breaches are clearly identified, agreed, and assigned between the provider and the customer organization.

Business Continuity Plan

There are two broad aspects to business continuity: one is having the countermeasures in place to prevent a disaster happening in the first place, and the other is having the countermeasures and plans in place to minimize the effects if a disaster does occur.
Organizations migrating to the cloud should realize that simply because the data or service is hosted in the cloud, this is not a license for them to forget about business continuity. The organization is still responsible for ensuring its business can continue in the event of any interruption, be it to its own in-house systems or to those of the Cloud Service Provider.
As such, the customer organization should seek reassurances that the Cloud Service Provider has a comprehensive business continuity plan in place, and should integrate that plan into its own business continuity plans. The key areas the customer organization should look at include
▪ Has the provider identified the business processes critical to the continued provision of its services?
▪ What is the priority for restoring services for the specific customer, i.e., does the cloud provider restore its largest customers first and smaller ones later?
▪ Has the provider conducted a detailed Business Impact Analysis (BIA)?
▪ Has the provider conducted a detailed risk assessment upon which to formulate its Business Continuity Plan?
▪ Has the provider identified the staffing requirements it needs to support the provision of critical services in the event of an interruption to the business?
▪ What solutions has the provider implemented to restore its services in a timely manner and to minimize interruption to the customer organization’s business processes?
▪ Has the provider identified and provisioned the facilities required to support the continuation of critical services in the event of an interruption to the business?
▪ What are the provider’s processes and procedures for invoking the Business Continuity Plan?
▪ What notifications will be provided to the organization in the event the Business Continuity Plan is invoked?
As well as ensuring the Cloud Service Provider’s business continuity policies, processes, and procedures are appropriate, it is equally important the customer organization revises its own plans and adapts them to the change in service delivery model. In many cases moving to the cloud can enhance business continuity for the client organization but this should not be taken for granted. The organization should review its plans to ensure the business can continue to operate should there be a business interrupting event either at their own facilities or those of the Cloud Service Provider.

Access Control

Ensuring only authorized personnel have access to the data and services stored in the cloud is another key challenge that organizations need to address. Once the data has migrated to the cloud, any authorized person with access to the Internet can theoretically gain access to that data.
It is important the customer organization has its own processes to ensure only authorized personnel have access to the cloud service based upon its security and business requirements. The organization should work with the Cloud Service Provider to ensure access to the service is provided in a manner which will protect the confidentiality and integrity of that information. This could be based on two-factor authentication solutions, restricting access to certain IP addresses associated with the organization, and/or restricting logins during certain times and from specific regions.
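The sketch below illustrates, in very simplified form, the kind of layered access decision described above: requiring a second authentication factor, restricting source IP addresses, and limiting login hours. The networks and hours used are hypothetical assumptions.

import ipaddress
from datetime import datetime

# Hypothetical policy: only the organization's office networks, only during
# extended business hours, and always with a second authentication factor.
ALLOWED_NETWORKS = [ipaddress.ip_network("203.0.113.0/24"),
                    ipaddress.ip_network("198.51.100.0/24")]
ALLOWED_HOURS = range(6, 22)   # 06:00-21:59 local time

def allow_login(source_ip: str, when: datetime, passed_second_factor: bool) -> bool:
    ip = ipaddress.ip_address(source_ip)
    in_allowed_network = any(ip in net for net in ALLOWED_NETWORKS)
    in_allowed_hours = when.hour in ALLOWED_HOURS
    return passed_second_factor and in_allowed_network and in_allowed_hours

print(allow_login("203.0.113.25", datetime(2014, 6, 2, 9, 30), True))   # True
print(allow_login("192.0.2.44", datetime(2014, 6, 2, 9, 30), True))     # False: unknown network
print(allow_login("203.0.113.25", datetime(2014, 6, 2, 23, 5), True))   # False: outside hours

In practice such rules would be configured within the cloud service or an identity provider rather than coded by hand, but the layering of conditions is the same.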
The organization should regularly review the access control rights to the service for users and groups of users to ensure that all access rights are appropriate for the role of the individual users.
The organization should also ensure that administrator access to the cloud service is limited to only those members of staff with a valid business requirement for such access. It should also ensure that other staff, such as developers and other application personnel, do not have administrator access to the service, except in emergencies and then with appropriate authorization.
As well as ensuring it manages the access to the service of its own staff, the customer organization should seek assurances from the Cloud Service Provider that appropriate access controls are in place regarding the provider’s staff.

Forensics and eDiscovery

Computer forensics and eDiscovery are relatively mature disciplines within traditional IT environments.35 However, when it comes to cloud computing these disciplines are still in their infancy. Cloud computing brings a number of challenges when trying to forensically capture data. First, there is the issue of where the data is stored and located, and how it can be gathered in a forensically sound way. There is also the issue of the dynamic nature of the cloud and how to soundly capture threads, processes, and memory to support an investigation. In a cloud environment there is also the challenge of how to isolate logs and other critical supporting evidence for one customer’s instance from those of all the other customers using that Cloud Service Provider.
When engaging with a Cloud Service Provider the customer organization should ensure it fully understands what the Cloud Service Provider can, and just as importantly cannot, provide with regard to computer forensics and eDiscovery requests. With that information the customer organization should review its own computer forensics and eDiscovery processes and procedures and adapt them accordingly.
The Cloud Security Alliance’s research group on Incident Management and Forensics36 is looking to develop guidelines on Best Practices for Incident Handling and Forensics in a Cloud Environment.

Data Migration

Migrating data into a Cloud Service Provider’s environment can be a time-consuming task. Data may have to be reformatted or restructured to fit in with the architecture of the Cloud Service Provider. However, once this has been completed many organizations enjoy the benefits of managing and processing their data using the power of the cloud. Customer organizations should ensure, though, when they first engage with a Cloud Service Provider, that they clearly understand and agree how their data can be migrated away from the provider in the event that the provider goes out of business, is taken over by another service provider, or should the customer organization decide to engage with a competitor providing a similar service. It is important that the customer organization takes these steps to ensure it does not get “locked in” to the service provider simply because it cannot retrieve its data in a timely and secure manner. The customer organization should familiarize itself, and be satisfied, with the data migration policy of the Cloud Service Provider. A key thing the customer organization should consider is what format the data will take should it decide to migrate away from a service provider. Will the data be returned as a flat text file, a CSV file, or in a structured file format? Each of these formats has implications for how easy it is to migrate the data to another platform. In addition, the customer organization should ensure the Cloud Service Provider securely erases all data that is no longer required to be stored with that provider.
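To illustrate why the export format matters, the following minimal Python sketch converts a hypothetical CSV export from one provider into structured JSON suitable for loading elsewhere. The file names, and the assumption that the export is a CSV with column headers, are illustrative only.

    # Sketch: reshape a hypothetical CSV export into structured JSON
    # before loading it into a new provider. File names are placeholders.
    import csv
    import json

    with open("export_from_old_provider.csv", newline="") as src:
        records = list(csv.DictReader(src))   # each row becomes a dict keyed by column header

    with open("import_for_new_provider.json", "w") as dst:
        json.dump(records, dst, indent=2)

    print(f"Converted {len(records)} records")

A flat text dump with no column headers would make even this simple conversion considerably harder, which is why the exit format should be agreed before any data is migrated in.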

Physical Security Controls

For most customer organizations, migrating their data or services to the cloud will result in that data being stored in facilities that have better physical security than many of those organizations can provide on their own premises. However, this is something that customer organizations should not take for granted, and when engaging with a Cloud Service Provider the details of how the organization’s data will be secured should be thoroughly reviewed and assessed to ensure the controls meet the customer’s requirements.
This should include the physical perimeter of the Cloud Service Provider’s premises where controls are in place to prevent access by unauthorized personnel. These controls should be designed to prevent unauthorized access, damage, or interference to the services provided from that facility. Monitoring of these controls should be in place such as the use of CCTV cameras, IDS, fire detection and suppression systems, logging at all entry and exit points, and the use of security guards.
The customer organization should determine that the Cloud Service Provider has appropriate controls in place to protect against environmental issues such as fire, floods, hurricanes, earthquakes, civil unrest, or other similar threats that could disrupt services.
Other physical controls should include protection against interruption to key services such as Internet access to the data centers, power, and water, and against environmental threats such as humidity, heat, and rodent infestation. There should be controls in place not just to prevent these threats from being realized but also to minimize their impact should they occur.

Technical Controls

Technical controls are key to protecting data in the cloud. It is important to note that many of the technical controls for the cloud are the same as those used in traditional IT environments. This is because, even though the cloud is a relatively new evolution of how data is managed, stored, and processed, the threats that face traditional systems, such as viruses, hacking, and spam, are just as relevant to the cloud.
Different implementations of technical controls will provide different levels of effectiveness. Also some providers may employ alternative controls in place of those expected by the customer organization. When engaging with a Cloud Service Provider customer organizations should use their risk assessment to ensure the controls provided by the Cloud Service Provider are adequate for the customer organization’s needs.
The Cloud Security Alliance’s Security Guidance for Critical Areas of Cloud Computing provides excellent details of what security controls should be implemented based on the customer organization’s needs and the type of cloud provider platform.
The core controls that a customer organization should ensure are in place are as follows.

Backups

Customer organizations should not assume that simply because their data is stored in the cloud there is no reason to worry about backing it up. Data can be deleted, lost, corrupted, or destroyed whether it is stored on traditional or cloud systems. When engaging with a Cloud Service Provider it is important to determine how the customer organization’s data is backed up, where it is backed up to (bearing in mind any compliance requirements), how the backups are secured, and how the backups can be accessed. It is also important to determine how long backups are retained and the time taken to restore either all of the data or individual files. This information will be key to the customer organization as it adjusts its business continuity and disaster recovery plans to take the adoption of cloud services into account.
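As a simple illustration of one element of this, the Python sketch below verifies that a restored copy of a file matches the original by comparing SHA-256 checksums; the file paths are hypothetical placeholders for a periodic restore test.

    # Sketch: verify a restored file matches the original by comparing checksums.
    # Paths are hypothetical; in practice this would run as a scheduled restore test.
    import hashlib

    def sha256_of(path: str) -> str:
        digest = hashlib.sha256()
        with open(path, "rb") as handle:
            for chunk in iter(lambda: handle.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()

    original = sha256_of("/data/customer_records.db")
    restored = sha256_of("/restore-test/customer_records.db")
    print("Restore verified" if original == restored else "Restore FAILED verification")

Knowing that backups can actually be restored, and how long that restore takes, is as important as knowing that they exist.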

Secure Deletion

Data deleted from disk is rarely fully erased. Instead, the references to where that data is stored are removed so the operating system knows it can overwrite those areas. As a result, many data recovery and forensic tools can easily restore deleted data. As data stored in the cloud can be located across different disks, different systems, and different data centers, it is important that the customer organization knows that when data is deleted it is done so in a way that prevents it from being recovered. This is particularly important where customers are migrating from one service provider to another and need to ensure their data is properly and securely removed from the previous provider.
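One technique often used where physical overwriting of cloud storage is not possible is crypto-shredding: data is stored only in encrypted form and is rendered unrecoverable by destroying the encryption key. The Python sketch below, which assumes the third-party cryptography package, illustrates the idea only and does not describe any particular provider’s deletion mechanism.

    # Sketch of crypto-shredding: data encrypted under a per-object key becomes
    # unrecoverable once every copy of that key is destroyed.
    # Requires the third-party 'cryptography' package.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()            # per-object data-encryption key
    ciphertext = Fernet(key).encrypt(b"customer record to be stored in the cloud")

    # The provider stores and replicates only the ciphertext.

    del key   # in practice the key is destroyed in the customer's key store
    # With no copy of the key remaining, the ciphertext cannot be decrypted,
    # however many replicas of it exist across the provider's data centers.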

Secure Development

Engaging with the cloud in many cases involves accessing services, data, and systems via an interface or application. The complexity of these applications will depend on the cloud platform. In an IaaS platform it may simply be a control panel, whereas in a SaaS environment it will be a full-blown application. It is important therefore that the customer organization has assurances that these applications, interfaces, and control panels have been developed in a secure manner and that security is built into the development cycle as early as possible.
When engaging with a Cloud Service Provider, customer organizations should get visibility into how security has been built into the provider’s Software Development Lifecycle (SDLC). The customer organization should seek assurances from the provider that its development team receives regular training in developing secure code. The provider should also be conducting secure code reviews of its software to identify any potential security bugs in the code. Another area for the customer organization to examine is how often the provider conducts threat tree analysis against its systems. Finally, the customer organization should determine what the provider’s policies are for identifying vulnerabilities in its code, patching those vulnerabilities, and keeping customers abreast of these issues.
The customer organization should also discover what secure coding principles the Cloud Service Provider is using. There are a number of readily available guides that organizations can incorporate into their SDLC, such as:
▪ The Open Web Application Security Project (OWASP) Top 10 Project37;
▪ The Open Web Application Security Project (OWASP) Cloud Top 10 Security Risks38;
▪ The SANS Top 25 Most Dangerous Software Errors39; and
▪ SAFECode’s Practices for Secure Development of Cloud Applications.40

Data Encryption

One of the most effective security controls when engaging with the cloud is to implement encryption, both when the data is at rest and when it is in transit. Customer organizations need to ensure that any data transmitted to and from the Cloud Service Provider is encrypted. This could be achieved either by employing SSL/TLS to encrypt traffic as it traverses the Internet or by using a VPN to connect to the provider.
When data is stored (at rest) on the Cloud Service Provider’s systems it should also be encrypted. It is important to understand what encryption algorithms the Cloud Service Provider employs for this. Ideally the encryption algorithms should be industry standard and peer reviewed. Should the Cloud Service Provider offer its own in-house developed solution, then this should be a cause for concern.
Encrypting data is not just about the algorithms used but also about how the keys to encrypt and decrypt that data are managed. In a cloud environment, it is important that the customer organization considers whether it needs to retain sole access to the keys so that the provider cannot access them. It is important to note that if the cloud provider does not have access to the keys (i.e., to the plain text data), it will be limited in what functionality (value) it can deliver (except when using homomorphic encryption in certain use cases).
Should the provider have access to the keys, then it is possible the provider can also use those keys to decrypt the data. Ideally any encryption solution should be managed only by the customer organization, with the provider having no ability to generate its own keys or modify those of others.
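As a minimal sketch of this principle, the Python code below encrypts data client-side with an industry-standard algorithm (AES-256-GCM) under a key that only the customer organization holds, so that only ciphertext ever reaches the provider. It assumes the third-party cryptography package and deliberately leaves key storage and rotation, which are the hard parts in practice, out of scope.

    # Sketch: client-side AES-256-GCM encryption before data leaves the organization.
    # Requires the third-party 'cryptography' package; key storage and rotation
    # are out of scope here.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    customer_key = AESGCM.generate_key(bit_length=256)   # held only by the customer
    aesgcm = AESGCM(customer_key)

    nonce = os.urandom(12)                                # must be unique per message
    plaintext = b"sensitive record destined for cloud storage"
    ciphertext = aesgcm.encrypt(nonce, plaintext, None)   # the provider sees only this

    # Decryption requires the customer-held key.
    assert aesgcm.decrypt(nonce, ciphertext, None) == plaintext

The trade-off described above applies: because the provider never sees the plain text, it cannot index, search, or otherwise process the data on the customer’s behalf.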

Denial of Service Attack Mitigation

In recent years we have seen an increase in the use of Denial of Service (DoS) and Distributed Denial of Service (DDoS) attacks against various organizations. A DoS attack is one in which attackers send so much traffic to the targeted systems that those systems can no longer provide the service to legitimate users. Customer organizations should determine what mitigation tools and services the Cloud Service Provider has in place, not just to protect the provider’s own service, but also the instances of the service used by the customer.
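A token-bucket rate limiter is one common building block of such mitigation: excess requests are shed so that legitimate users continue to be served. The Python sketch below, with illustrative capacity and refill values, shows the idea only; real DDoS protection operates at the network edge and at far greater scale.

    # Sketch: token-bucket rate limiting, one ingredient of DoS mitigation.
    # Capacity and refill rate are illustrative values only.
    import time

    class TokenBucket:
        def __init__(self, capacity: float, refill_per_second: float):
            self.capacity = capacity
            self.tokens = capacity
            self.refill = refill_per_second
            self.last = time.monotonic()

        def allow(self) -> bool:
            now = time.monotonic()
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.refill)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True       # request served
            return False          # request shed to protect legitimate users

    bucket = TokenBucket(capacity=100, refill_per_second=50)
    print(bucket.allow())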

Security Monitoring

Recognizing that suspicious activity is occurring against systems is key to being able to respond quickly and effectively. Logs provide security teams with the ability to identify potential attacks, be alerted to ongoing attacks, and help investigate an attack. In a traditional IT environment, it is possible to implement monitoring of security and other relevant logs. However, it is not as straightforward in a cloud environment. To determine how effective a customer organization’s incident detection and response will be in the cloud, the organization needs to determine what visibility it will have into the logs.
It may be the case that the organization will not have direct access to the logs and will have to rely on the Cloud Service Provider’s security team to monitor the services and report any suspicious activity to the customer. In this case, it is important the customer organization ensures this activity is included and managed within the SLA.
Should the Cloud Service Provider allow the customer access to the logs, then the customer needs to determine the following (a minimal monitoring sketch follows the list):
▪ The level of access they can have to the logs. Will it be direct access or via an API?
▪ How the customer’s log data is isolated from other customers’ log data.
▪ How the customer will monitor those logs.
▪ The devices, such as firewalls, routers, servers, and switches, which should be configured to record events.
▪ The events that should be recorded for each type of system or component on the service.
▪ Where events should be stored. Will they be stored on systems within the provider’s environment or will the customer store the events on its own premises?
▪ The retention policy for event logs and their details.
▪ How alerts are created for certain events, event patterns, or combinations of events.
▪ What tools and utilities are implemented to monitor for these events.
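By way of illustration, the Python sketch below shows the kind of monitoring a customer might perform on exported logs, counting failed logins per source address and flagging anything above a threshold. The JSON-lines log format, field names, file name, and threshold are all assumptions, not a description of any provider’s log schema.

    # Sketch: flag source addresses with repeated failed logins in exported logs.
    # The log format, field names, and threshold are assumed for illustration.
    import json
    from collections import Counter

    THRESHOLD = 10
    failures = Counter()

    with open("cloud_auth_events.jsonl") as log:
        for line in log:
            event = json.loads(line)
            if event.get("action") == "login" and event.get("result") == "failure":
                failures[event.get("source_ip", "unknown")] += 1

    for source_ip, count in failures.items():
        if count >= THRESHOLD:
            print(f"ALERT: {count} failed logins from {source_ip}")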

Firewalls

Firewalls are security devices used to manage network traffic between two networks. In most cases firewalls are configured to allow only certain traffic through and to deny all other traffic. While it would be expected that a Cloud Service Provider has firewalls in place, the customer organization should familiarize itself with the types of firewalls the provider employs and whether or not they satisfy the risk profile of the customer organization. Areas to consider (a brief rule-review sketch follows the list) are:
▪ Is the firewall dedicated to the customer or shared among other clients?
▪ How often are the firewall rules reviewed to ensure they are still applicable and required?
▪ Are changes to the firewall rules reviewed to ensure they do not conflict with other rules for the customer or indeed with other customers’ rules?
▪ How often are the latest software patches and security updates installed on the firewall?
▪ Are they regularly tested to ensure that they provide the level of security required?
▪ How often are the firewall configurations reviewed to ensure they are still applicable and appropriate?
▪ Are the firewalls monitored for security alerts?
▪ Are Web Application Firewalls in place or available for use by the customer?
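As a small illustration of the periodic rule review these questions imply, the Python sketch below scans a simplified, hypothetical rule set for overly permissive “any to any” entries. Real firewall exports differ by vendor; only the review idea is shown.

    # Sketch: flag overly permissive entries in a simplified, hypothetical rule set.
    # Real firewall configurations vary by vendor.
    rules = [
        {"id": 1, "source": "203.0.113.0/24", "destination": "10.0.5.10", "port": "443", "action": "allow"},
        {"id": 2, "source": "any", "destination": "any", "port": "any", "action": "allow"},
    ]

    for rule in rules:
        too_broad = rule["action"] == "allow" and "any" in (
            rule["source"], rule["destination"], rule["port"])
        if too_broad:
            print(f"Review rule {rule['id']}: overly permissive "
                  f"({rule['source']} -> {rule['destination']} port {rule['port']})")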

Intrusion Detection Systems

While the Cloud Service Provider should have mechanisms in place to detect threats such as computer viruses, it should also have mechanisms in place to detect malicious or suspicious network traffic, such as an IDS or Intrusion Prevention System (IPS). An IDS can monitor network traffic for suspicious activity that may indicate an attack is taking place and raise an alert should it do so. An IPS is similar to an IDS, with the additional ability to automatically launch a number of prescribed actions to react to and prevent the attack.
Customer organizations should determine whether their risk profile requires the Cloud Service Provider to have an IDS or IPS in place.

Personnel Controls

The people who will be working with the systems and data are a key element in maintaining the security of those systems and data. Good security requires that all staff are properly trained in how to use and interact with the systems they work with, so that untrained users do not corrupt any data. Good security training should enable staff to better understand the risks involved in working with such systems and data and how they can help minimize those risks. It also requires that those charged with securing the systems and/or data are properly trained, skilled, and experienced in the technologies and the disciplines required for their role.
While an organization can manage the above responsibilities for its own staff, it does not have the same direct control over the staff of the Cloud Service Provider. The customer organization in that case should determine from the provider how the following areas are dealt with:
▪ Background checking
What background checking does the Cloud Service Provider conduct when hiring new staff, be they permanent, temporary, or contractors? It is important to know how in-depth those background checks are to determine and verify (where possible) the following:
▪ Employment history
▪ Educational qualifications
▪ Criminal background checks, in particular those that may be most relevant for the role such as convictions relating to fraud and computer crime. Note that in some countries it is not possible for companies to conduct criminal background checks unless their business is in certain areas such as access to children, access to vulnerable people, or working on data related to certain government or financial institutions.
▪ Credit history, in particular to determine if the individual has a poor credit rating or may be in financial difficulty. If this is the case then that individual could be at a higher risk of committing fraud. As with criminal background checks, it should be noted that running credit history checks on employees may not be legal in some jurisdictions.

Insider Threat

People working within a company, be that the customer organization or the Cloud Service Provider, have trusted access to key systems and data. This access could be abused, either accidentally or deliberately, undermining the security of those systems and data.
When engaging with a Cloud Service Provider, a customer organization needs to realize that the insider threat comes not only from within its own organization but now extends to the staff of the Cloud Service Provider, and indeed to any supplier or subcontractor the provider engages with.
It is important therefore that the customer organization gets assurances from the Cloud Service Provider that it is monitoring the insider threat and has controls in place to reduce the risk. These controls could include ensuring access controls are properly in place and maintained, that segregation of duties is clearly defined and managed, that privileged access is granted only on a need-to-know basis, that access to systems is closely monitored, and that regular reviews of access rights are conducted.
The CERT Insider Threat Center41 run by CERT/CC,42 which is part of Carnegie Mellon University, has a lot of additional research available in this area.
▪ Security Awareness
Making staff aware of the security threats that face an organization and how they can manage those threats is a key element in maintaining the security of key data and systems. Customer organizations should get visibility into, and be satisfied with, the security awareness program the Cloud Service Provider is running. In particular, the provider should ensure that staff at different levels and from different parts of the business receive security awareness training appropriate to their role.
Securing the cloud is not a task that organizations should leave to the Cloud Service Provider alone. The customer organization is responsible for its data and its services and therefore needs to work with the Cloud Service Provider to ensure that the appropriate security controls are in place to protect them. Securing the cloud is a shared responsibility and can only be achieved properly through a collaborative effort by all parties.
This chapter has highlighted some of the areas that customer organizations should consider when migrating data to the cloud. More comprehensive details and recommendations can be found in the Cloud Security Alliance’s “Security Guidance for Critical Areas of Cloud Computing.”43 In addition, ENISA provides excellent advice in its “Procure Secure—A Guide to Monitoring of Security Levels in Cloud Contracts”44 to ensure those controls are performing and working as expected.