Chapter 10

The Future Cloud

Abstract

Cloud computing is evolving, and this chapter considers its role within critical national infrastructure as well as what will be required to secure such critical assets. It is intended to provide a view into the components required to secure the cloud of tomorrow.

Keywords

Cloud Broker; Critical infrastructure; ICS; SCADA
A Look into the Future
▪ Cloud computing for critical infrastructure
▪ Defining the security requirements for tomorrow’s cloud
There is a sense of trepidation when trying to forecast anything technology related; many great names have tried and failed spectacularly when attempting to predict the future of the technology industry. There are, however, some emerging trends whose direction even we would feel comfortable predicting, as follows:
▪ In the future there will be more users connected to the Internet.
▪ In the future there will be more devices connected to the Internet.
▪ In the future there will be more data, all of which will need to be stored, processed, and of course secured.
These three statements are hardly going to surprise anybody; they would certainly be classed as the safest predictions we can make, and are unlikely to earn us an entry on the multiple Web sites highlighting failed but rather amusing technology predictions. However, with these emerging trends beginning to be realized, we have to ask ourselves what role the cloud will play. To answer this question, we need to understand the scale of the emerging trends.

More, More, and More

Regardless of the source, all evidence points to more data, users, and devices:
▪ IDC predicts the installed base of things connected will be 212 billion by the end of 2020, including 30 billion connected autonomous things.1
▪ Cisco predicts that the number of network-connected devices will be more than 15 billion, twice the world’s population, by 2015.2
▪ Today’s Internet has 2.09 billion users [cited Oct 2011]; by 2020, global Internet access will probably have risen to nearly 5 billion users.3
▪ The global big data market will show a 26% compound annual growth rate from 2013 to 2018.4
The above predictions are of course only a small snapshot of the forecasts reinforcing the belief that we will witness more devices, data, and connected users in the future. However, it was not entirely necessary to reference external sources to reinforce the predictions; a simple look into our homes would likely have been sufficient. Once, the home consisted of a single desktop and a dial-up modem to connect to the Internet; now it is not uncommon for the average home to have at least 3–5 connected devices. Perhaps more telling are the types of devices that are now being connected; these go beyond the traditional information technology (IT) devices such as laptops, tablets, and smartphones.
This is illustrated by the results of Project SHINE (SHodan INtelligence Extraction), which demonstrate that the types of devices and their functions are evolving, and fall well outside traditional IT. Designed to understand the supervisory control and data acquisition (SCADA) and industrial control system (ICS) devices that are accessible from the Internet, the project reported as of September 2013 that “The average number of new SCADA/ICS devices found every day is typically between 2000 and 8000. So far we have collected over 1,000,000 unique IP addresses that appear to belong to either SCADA and control systems devices or related software products.”5 The types of devices being discovered include
▪ medical devices
▪ traffic management systems
▪ automotive control
▪ traffic light control (includes red-light and speeding cameras)
▪ heating, ventilation, and air conditioning/environment control
▪ power regulators/uninterruptible power supplies
▪ security/access control (includes closed-circuit television and webcams)
▪ serial port servers (many of which include Allen-Bradley DF1 capable protocols)
▪ data radios (point-to-point 2.4/5.8/7.8 GHz direct-connected radios)
As we can clearly see, these are outside the general sphere of traditional IT, but the question now becomes, what role will cloud computing play in the future? According to an iView6 published by analyst firm IDC entitled The Digital Universe in 2020, the type of information that will be stored in this future cloud will include considerably more than personal computers, phones, and consumer electronics, as depicted in Figure 10.1. IDC predicts that the digital universe in 2020 will consist of 40 trillion gigabytes of data, with cloud computing forecast to touch approximately 40% of the information within this universe, and 15% stored or maintained within the cloud.
FIGURE 10.1 Growing role of cloud computing. EB, Exabyte.
With cloud computing therefore touching some 40% of information assets, and extending into the processing, storage, and in some cases management of devices from a critical infrastructure perspective, the need for cloud security has never been greater.

Cloud Computing for Critical Infrastructure

Broadly speaking, for many organizations, particularly those that operate within a critical infrastructure environment, there are three distinct zones. We can of course argue over the definition of the term critical infrastructure, but by and large it refers to those organizations that operate within industries that are critical to society.
Figure 10.2 provides a graphical illustration of the three zones; these are defined as Corporate IT, Command and Control (which includes the SCADA and ICS devices), and the Device network. Cloud computing has typically operated within the Corporate IT network, which experiences the challenges we all read about, from Bring Your Own Device to the integration of cloud computing. The question, of course, is the role of cloud computing when connecting and enabling the Command and Control and Device zones, particularly as these zones involve assets that are critical not only to the organization but also to the society it serves. That is not to say that IT assets are not important, but in this particular context they are not critical.
FIGURE 10.2 Zonal approach to critical infrastructure.
Historically, and still in operation by many organizations today, is the belief that an air gap between the IT and Command and Control zones is all that is required to protect the critical assets. Without delving into the merits of this approach, there is an emerging trend to enable connectivity between these zones for business purposes. For example, within the Oil and Gas sector, the migration to smart oil fields allows the remote management of drilling operations, which can and is leading to increased productivity through greater oil production. What this means is that the risks the IT network has been facing for the last 10–15 years are going to manifest themselves in these critical zones. Of course, in the same vein there will also be opportunities that can lead to huge efficiency gains, such as the smart oil field, and of course cloud computing.
We have focused on the many benefits the cloud can provide to businesses, and there is nothing to suggest that these same benefits cannot be realized for ICS and SCADA environments (or commonly the Command and Control zone). Many of these benefits were detailed by Trend Micro in the report entitled “SCADA in the Cloud: A security conundrum?”7 The paper details the benefits of using the cloud within SCADA/ICS environments as follows:
FIGURE 10.3 SCADA in the cloud. SCADA, supervisory control and data acquisition.
▪ Redundancy and flexibility benefits: Cloud environments allow infrastructures to be established considerably more quickly than internally hosted systems, making redundancy easier to address.
▪ Disaster recovery and automated updates: The ability to resolve issues more quickly within cloud environments than in noncloud environments. Indeed, cloud-enabled businesses are reported to resolve issues within 2.1 h as opposed to 8 h.
Within the whitepaper, two architectures were proposed, graphically depicted in Figure 10.3. On the left-hand side of the graphic, SCADA applications are deployed on premise, with data pushed to the cloud for analytics and further access. The right-hand graphic has the SCADA applications hosted entirely within the cloud. Of course, each scenario has its own advantages and security risks. In the first scenario, there is the risk of data being compromised (confidentiality) within the cloud. These risks apply either to the data stored or to the data in transit (while being transferred between the application and the cloud). The command and control element remains on premise, so the existing risks associated with securing a SCADA/ICS environment remain. In the latter example, however, there are additional risks whose implications are more significant than in the first example. While confidentiality is of course a concern, there is also the risk of data being intercepted, modified, and replayed. This introduces integrity risks and the prospect of devices accepting unauthorized commands. The implications of such an incident occurring are considerable; while the earlier example of MegaUpload was a major inconvenience for its customers, there is no question that a cyber event detrimentally impacting critical assets would be considerably more serious.
Because this particular section sits within the final chapter, and indeed a chapter entitled the Future Cloud, one might believe that this migration is something we will only experience in the future. However, many service providers are already offering cloud-based SCADA solutions for critical infrastructure environments. Moreover, the cost savings can be significant. For example, a “typical new in-house SCADA system for a small water treatment facility can have an upfront capital cost of about $11,500 for software, computer, telemetry, programming and setup. Compared to the initial approximate $1,600 cost of getting started with a cloud-based SCADA system, users can achieve about 90 percent reduction in costs.”8 This example is just the tip of the iceberg, with many automation companies expanding their portfolios so that critical infrastructure customers can leverage cloud computing. In May 2013, for example, it was announced that automation vendor ABB and GlobaLogix partnered9 to offer a Software-as-a-Service solution for the Oil and Gas sector. According to Sandy Taylor, Head of ABB’s Oil, Gas and Petrochemical business unit, the “cloud-based SCADA infrastructure can save companies money and time by eliminating the need to build and maintain their own dedicated server rooms, while reducing SCADA administration costs and overall risks…The overall return on investment time can be reduced by 25–30% or more.”
Indeed, the benefits that cloud computing can bring to critical infrastructure operators extend beyond simply monetary savings. According to Hitachi Data Systems, in their 2012 Whitepaper entitled “How to improve Healthcare with Cloud Computing”10 the future of health care will be greatly aided by cloud computing: “electronic medical records, digital medical imaging, pharmacy records and doctor’s notes are all consolidated and accessible. The ability of researchers to run analytics, better treatment options, optimal insurance programs and the possibilities of truly personalized healthcare have become a reality. Data drives the new healthcare world and access is greater than ever before. Big data becomes better managed due to cloud technology, as storage, compute power and consolidation reach levels never before achieved. Portability of data delivers information where it is needed, when it is needed.”
While the benefits are of course very obvious, and indeed have been realized by IT professionals for many years, they are now being commercially realized by Critical National Infrastructure (CNI) organizations. However, according to the European Network and Information Security Agency (ENISA), in its report entitled “Critical Cloud Computing; A CIIP perspective on cloud computing services,”11 dated December 2012, this is in fact a double-edged sword: “On the one hand, large cloud providers can deploy state of the art security and business continuity measures and spread the associated costs across the customers. On the other hand, if an outage or a security breach occurs then the consequences could be big, affecting many citizens, many organizations, at once.” Moreover, we must also consider the impact of a major outage; within an IT environment the implications can be significant, as experienced by MegaUpload customers, or indeed by Amazon.com, which was reported to have rejected US and Canadian customers when Amazon Web Services experienced a 59-min outage within its US-EAST data center.12
The role that cloud computing plays with regard to critical infrastructure has also been clearly recognized by lawmakers. More specifically, within Europe, the scope of the Network and Information Security (NIS) Directive has been defined to include cloud computing providers that support critical infrastructure operations. The inclusion of cloud computing within the scope of the NIS Directive is due to the risks to critical infrastructure, which according to ENISA are magnified by the concentration of Information and Communications Technology (ICT) resources. In the event of a large-scale disruption, the consequences could be significant and impact many citizens and organizations alike. Indeed, the impact is likely to be considerably greater due to the concentration of services, although it is worth noting that the likelihood will decrease.

Defining the Security Requirements for Tomorrow’s Cloud

What the preceding paragraphs clearly highlight is that cloud computing is beginning to find its way into sectors that traditionally were not under the control of the IT department. However, while there are many benefits, the impact, should incidents occur, can be more significant than what we have witnessed thus far.

Dynamic Attestation

With considerably more at stake, should there not be a different approach to the assurance of third-party services than what is being used today? In other words, is it acceptable to rely on annual assurance statements from third-party auditors (or even self-certification) when something as important as the energy network is reliant on maintaining service?
The answer of course is probably not, and this is partly why so many regulators and lawmakers globally are placing greater scrutiny on the technology supporting critical infrastructure.
Defining the specific assurance requirements will not be easy, mainly because different environments will require differing levels of assurance. One thing is clear, however: greater transparency is necessary than that provided by simply relying on annual attestations. It is often cited that using cloud computing is like managing any other third party; there are, however, some challenges with this particular statement. Figure 10.4 graphically depicts a simple illustration of the supply chain related to a smart grid environment.
Within this environment the end customer sits within ring 0, and as we extend out into further rings 1, 2, and so on, we add multiple stakeholders within the broader supply chain. Within ring 0, however, the greatest level of transparency exists; roughly translated, those assets within my own control afford me the greatest transparency. As we begin to extend out into further rings, the level of transparency falls.
Within a cloud environment, however, the level of transparency is likely to be lower than in a traditional outsourced contract (for the many reasons detailed earlier in the book, but effectively because of the volume of customers and because the right to audit is generally removed). Subsequently, there will be a need not only to provide assurance but also to do so in real time. The need for dynamic attestation will be driven largely by the potential impact and the prospect of regulatory requirements and their associated penalties (not only financial, with mandatory breach notification). This requirement can be translated as providing a mechanism that can give transparency and assurance on demand, and more importantly provide the necessary intelligence to proactively anticipate potential threats.
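To make this requirement concrete, the following is a minimal Python sketch of an on-demand attestation check performed from the customer's side. The endpoint, response fields, and baseline values are hypothetical assumptions for illustration, not any particular provider's API.

import json
import urllib.request

# The customer's expected security posture, e.g., derived from the contract/SLA.
BASELINE = {
    "disk_encryption": "AES-256",
    "hypervisor_measured_launch": True,
    "last_patch_age_days_max": 30,
}

def fetch_attestation(endpoint):
    """Pull the provider's current posture statement (hypothetical attestation API)."""
    with urllib.request.urlopen(endpoint) as resp:
        return json.load(resp)

def verify_posture(report):
    """Return a list of deviations from the agreed baseline."""
    findings = []
    if report.get("disk_encryption") != BASELINE["disk_encryption"]:
        findings.append("disk encryption differs from agreed standard")
    if not report.get("hypervisor_measured_launch"):
        findings.append("hypervisor was not launched from a measured environment")
    if report.get("last_patch_age_days", 0) > BASELINE["last_patch_age_days_max"]:
        findings.append("patch level older than agreed window")
    return findings

if __name__ == "__main__":
    report = fetch_attestation("https://csp.example.net/attestation/tenant-42")
    for finding in verify_posture(report) or ["posture matches baseline"]:
        print(finding)

In practice such a check would be scheduled or event driven, feeding its findings into the customer's monitoring so that drift from the agreed posture is flagged as it happens rather than at the next annual audit.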

Third-Party Access

We are of course not proposing that the current cloud does not provide a mechanism for third-party access; however, it is suggested that the volume of requests, and subsequently the granularity of permissions likely to be sought, will be vastly different from today. Let us take one simple use case and combine it with one of the previous predictions, namely, the likely growth in data.
FIGURE 10.4 Growing supply chain. ISVs, Information security vendors.
The use case to illustrate this requirement? Why, the smart grid of course (note that this particular use case was chosen entirely for its relevance and has nothing to do with the fact that coauthor Raj Samani’s previous book, written with Eric D. Knapp, is conveniently titled “Applied Cyber Security and the Smart Grid”)!
When people consider the smart grid, the default response is the smart meter. While there is considerably more to the grid than simply meters, for the purposes of this example the default position is appropriate. With many governments around the world committing to deploying a meter in every home within the next 5–10 years (e.g., in Denmark the Minister for Climate has announced an act requiring utilities to install meters in every household13), the number of meters collecting data in every home is expected to grow. According to Pike Research, the number of meters will grow to 535 million units by 2015, reaching a total number of installations of 963 million by the year 2020.14 Combine the total number of meters with the fact that these meters will be collecting energy consumption data at every installation at a regular polling interval (in some cases as low as every 2 s), and we can see why the earlier prediction of more data is looking a fairly safe bet.
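A quick back-of-the-envelope calculation, using only the figures quoted above, shows the scale involved; the 100-byte record size is purely an assumption for illustration.

# Rough estimate of smart meter data volumes using the figures above:
# 963 million meters by 2020, polling as frequently as every 2 s.
meters = 963_000_000
polling_interval_s = 2
record_size_bytes = 100  # assumed size of one consumption reading

readings_per_meter_per_day = 24 * 60 * 60 // polling_interval_s  # 43,200
readings_per_day = meters * readings_per_meter_per_day
bytes_per_day = readings_per_day * record_size_bytes

print(f"{readings_per_day:,} readings per day")               # ~41.6 trillion
print(f"{bytes_per_day / 1e15:.1f} PB of raw data per day")   # ~4.2 PB

Even with far less aggressive polling intervals, the volumes quickly reach a scale at which cloud storage and processing become the natural home for the data.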
With utilities collecting such a wealth of valuable data, there is absolutely no question that these data would be of enormous value to third parties; for example, consider a retailer looking to sell a washing machine: knowing which model of machine people use within their homes would be of great commercial interest. Putting aside the privacy considerations for one moment (these are covered in detail, along with the legal and regulatory requirements, within the previously mentioned book), and the fact that many utilities are vowing not to share the data,15 where transparency and explicit consent are achieved we will witness huge demand for access to the data gathered by these meters. Furthermore, the level of granularity and access control will have to contend with complex privacy rules, so that data that uniquely identify an individual are obfuscated, for example, and then deanonymized where an end customer wishes, say, to purchase a new washing machine based on a communication from the retailer.
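The following sketch illustrates that obfuscate-then-deanonymize flow. The key handling, names, and record format are illustrative assumptions; the point is simply that the retailer only ever sees a pseudonym, while the utility, holding the key and a consent record, can map it back to the customer.

import hmac
import hashlib

UTILITY_SECRET = b"held-by-the-utility-only"  # in practice an HSM-protected key

def pseudonymize(meter_id):
    """Deterministic pseudonym: stable for analytics, meaningless to the retailer."""
    return hmac.new(UTILITY_SECRET, meter_id.encode(), hashlib.sha256).hexdigest()[:16]

# What the utility stores privately (pseudonym -> meter), built at share time.
reverse_index = {}

def share_reading(meter_id, appliance_model, kwh):
    token = pseudonymize(meter_id)
    reverse_index[token] = meter_id
    return {"subject": token, "appliance": appliance_model, "kwh": kwh}

def deanonymize(token, consent_given):
    """Only resolve the pseudonym when the customer has explicitly consented."""
    return reverse_index.get(token) if consent_given else None

record = share_reading("METER-0042", "washing-machine-x200", 1.3)
print(record)                                # safe to pass to the retailer
print(deanonymize(record["subject"], True))  # 'METER-0042' once consent exists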
With cloud computing seen as an integral part of smart meter implementations, as demonstrated by utilities confirming the migration of systems within their smart grid implementations to Cloud Service Providers (CSPs), this particular use case is a good example not only of the requirements but also of the direction of the future cloud. In this example, one of the key requirements for the future cloud will be to manage an enormous volume of third-party access requirements; this extends the current access requirements both in volume and in complexity. Other examples outside of the smart grid include the role of cloud computing within health care. According to research firms, the cloud computing market for health care purposes is predicted to reach $5.4 billion by 2017.16 One particular use of cloud computing for health-related purposes will be the storage of electronic health records (EHRs), which are likely to increase due to the requirements set out by the Patient Protection and Affordable Care Act. The act requires all US citizens to sign up for health insurance, which in itself will dramatically increase the number of medical records that facilities will have to support. Moreover, with a greater number of records, the medical facilities will need to ensure accessibility of the data by multiple third parties, including health insurance providers, medical researchers, and clinical staff. While there is quite rightly considerable interest in the use of cloud computing, particularly for research purposes, there exist privacy considerations when such data become accessible by third parties. Therefore, much like the smart grid example, data will need to be pseudonymized/anonymized before access is provided to third parties. Those third parties may include any number of stakeholders undertaking the following roles (taken from the National Health Service (NHS) Care Record Guarantee17):
▪ check the quality of care (such as a clinical audit);
▪ protect the health of the general public;
▪ keep track of NHS spending;
▪ manage the health service;
▪ help investigate any concerns or complaints you or your family have about your health care;
▪ teach health care professionals; and
▪ help with research.
Dependent on the role of the individual, access to the EHR may need to obfuscate particular fields. For example, the Health Insurance Portability and Accountability Act (HIPAA) requires explicit consent from the data subject where the data are not used for treatment purposes:

The Privacy Rule protects all personally identifiable health information, known as protected health information (PHI), created or received by a covered entity. Personally identifiable health information is defined as information, including demographic information, that “relates to past, present, or future physical or mental health or condition of an individual, the provision of health care to an individual, or the past, present, or future payment for the provision of health care for the individual” that either identifies the individual or with respect to which there is a reasonable basis to believe the information can be used to identify the individual.

45 C.F.R. § 160.103

Restrictions on Use and Disclosure

Covered entities may not use or disclose PHI except as permitted or required by the Privacy Rule13. A covered entity may disclose PHI without the individual’s permission for treatment, payment, and health care operations purposes. For other uses and disclosures, the Privacy Rule generally requires the individual’s written permission, which is an “authorization” that must meet specific content requirements.18

Within this scenario, access by third parties may require explicit consent from the data subject. However, there are additional complications to this principle, since the Privacy Rule permits the disclosure of protected health information to specific stakeholders without consent under specific circumstances. Subsequently, when we consider the level of granularity associated with access to EHRs stored within a cloud environment, the data may need to be obfuscated to remove personal identifiers, depending on the role of the requesting party. Under specific circumstances, however, obfuscation will not be applied where the stakeholder meets the conditions not requiring authorization (e.g., for research purposes this is covered under 45 C.F.R. § 164.512). Alternatively, where explicit authorization is granted, any controls to obfuscate PHI are equally not applied.
All sounds rather simple, does it not?
While there is no intention of delving into the details of HIPAA, or indeed any other industry vertical regulation, it does demonstrate that as cloud computing becomes more ubiquitous and is used more within highly regulated industries, there will be a need for considerably more granular controls over third-party access. Moreover, the volume of requests, or rather the disparate nature of those requesting access, will only increase. Therefore, the future cloud will need to consider the context behind specific requests for data, and ideally dynamically obfuscate specific fields dependent on this context. In addition, the likelihood is that the future cloud will have to support stronger authentication methods than the simple password. This of course is not solely a future cloud consideration, with multiple providers already offering such services, but it will certainly be a requirement to support sensitive data.
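As a simple illustration of context-dependent obfuscation, the sketch below masks identifying fields of an EHR according to the role of the requester and whether explicit consent exists. The roles, field names, and masking rules are assumptions for illustration only; real HIPAA de-identification follows the Safe Harbor or expert-determination approaches.

# Hypothetical EHR record and role-based visibility rules.
EHR = {
    "name": "Jane Doe",
    "date_of_birth": "1971-04-02",
    "postcode": "AB12 3CD",
    "diagnosis": "type 2 diabetes",
    "prescriptions": ["metformin"],
}

IDENTIFIERS = {"name", "date_of_birth", "postcode"}

# Identifying fields each role may see without explicit consent (illustrative).
ROLE_VISIBILITY = {
    "treating_clinician": {"name", "date_of_birth", "postcode"},  # treatment purpose
    "researcher": set(),                                          # de-identified only
    "insurer": {"name", "date_of_birth"},                         # payment purpose
}

def release(record, role, explicit_consent=False):
    """Return the record with identifiers masked unless role or consent allows them."""
    if explicit_consent:
        visible = set(record)
    else:
        visible = (set(record) - IDENTIFIERS) | ROLE_VISIBILITY.get(role, set())
    return {k: (v if k in visible else "***") for k, v in record.items()}

print(release(EHR, "researcher"))          # identifiers masked
print(release(EHR, "treating_clinician"))  # full record for treatment purposes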

Real-Time Assurance

We briefly touched on the real-time dependency of future cloud customers to support critical operations. This was used to illustrate the requirement of dynamic attestation, wherein an end customer can automatically verify the security posture of assets hosted within a cloud environment. Another critical requirement related to real-time assurance is to proactively verify security maturity, leveraging hardware to guarantee the integrity of assurance provided.
In many instances, assurance for the end customer is derived through service level agreements (SLAs), but according to research firm Heavy Reading, “The key for Cloud Service Providers is to find a service assurance solution that can monitor the cloud infrastructure at all levels while preemptively managing subscriber experience at the application level.”19 While for some assets hosted in the cloud there may not be a need for real-time attestation that the SLAs are being met, when we consider the critical infrastructure environments discussed earlier, it is likely that the end customer’s risk appetite will demand real-time attestation.
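In practice this points toward continuous, automated verification of the agreed service levels rather than periodic reporting. The sketch below shows the idea; the metric names, thresholds, and polling source are illustrative assumptions, not any specific provider's monitoring interface.

import time

SLA = {"availability_pct_min": 99.95, "provisioning_latency_s_max": 120}

def current_metrics():
    """Placeholder for a pull from the provider's monitoring interface."""
    return {"availability_pct": 99.97, "provisioning_latency_s": 85}

def breaches(metrics):
    """Compare reported metrics against the contracted thresholds."""
    issues = []
    if metrics["availability_pct"] < SLA["availability_pct_min"]:
        issues.append("availability below contracted level")
    if metrics["provisioning_latency_s"] > SLA["provisioning_latency_s_max"]:
        issues.append("provisioning slower than contracted")
    return issues

for _ in range(3):  # bounded loop here for illustration; in practice event driven
    for issue in breaches(current_metrics()):
        print("ALERT:", issue)  # raised to the customer's operations team in real time
    time.sleep(1)               # polling interval shortened for illustration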
Examples of such technologies are now becoming available in the market. Without wishing to appear as an endorsement, one such example is that provided by Intel through the trusted execution technology (TXT) capability.

Intel TXT

Available from 2007, Intel TXT20 was released with the intention of providing a verified launch, utilizing a measured launch environment (MLE) that refers to a known good launch configuration. Any changes to this verified launch environment can be detected via cryptographic verification (hashed or signed). There is also the ability to leverage the hardware to remove any residual data when the MLE is not correctly shut down. Figure 10.5 shows how TXT works to protect a virtual server environment.
FIGURE 10.5 TXT trusted pools.
For a cloud-related environment, there are many advantages to deploying such a hardware-assisted approach to validating integrity. For example, the end customer can receive validation that a trusted hypervisor has been launched. Achieving this level of integrity may seem excessive for virtual machines supporting nonsensitive data/services, but for highly critical environments it is likely necessary. One example of where such an environment is appropriate is the concept known as “trusted pools,” where only trusted hosts can participate within the pool. Policies can then be used to prevent unverified hosts from accessing the pool; this ensures that any potentially compromised hosts are not allowed into the trusted pool, where they could negatively impact the trusted hosts.
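A trusted-pool admission policy can be sketched as follows: a host joins only if its reported launch measurement matches a known-good value recorded at deployment. The measurement values and registry below are hypothetical placeholders, not actual TXT output.

# Known-good ("golden") launch measurements recorded when the build was approved.
KNOWN_GOOD_MEASUREMENTS = {
    "hypervisor-6.2-build118": "9f1c3ae7d2",  # placeholder hash value
}

trusted_pool = set()

def request_admission(host, hypervisor_build, reported_measurement):
    """Admit a host to the trusted pool only if its measured launch matches."""
    expected = KNOWN_GOOD_MEASUREMENTS.get(hypervisor_build)
    if expected is not None and reported_measurement == expected:
        trusted_pool.add(host)  # verified launch: may host sensitive workloads
        return True
    return False                # unverified host stays outside the pool

print(request_admission("host-01", "hypervisor-6.2-build118", "9f1c3ae7d2"))  # True
print(request_admission("host-02", "hypervisor-6.2-build118", "deadbeef"))    # False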

End-to-End Validation

One of the earlier predictions centered on the increase in devices we are witnessing and will further witness within the Internet of tomorrow. An emerging challenge for end customers within this new world is the need for absolute validation that an incoming request has not been tampered with and that the integrity of the request is maintained. This is particularly important within a critical infrastructure environment, where interactions are likely to be undertaken between devices without any human involvement. This of course provides an opportunity to utilize the hardware between the machines (both within the cloud and communicating with the cloud), and to do so without the uncertainty of relying on a user who could be manipulated, tricked, or bribed.
Establishing this hardware root of trust can be achieved with solutions now being introduced into the market. Of course, there is no intention within this publication to endorse any commercial offerings; however, such solutions are significant and warrant further analysis.
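The essence of the requirement can be shown in a few lines: each message between machines carries a cryptographic tag produced with a key that, in a real deployment, would be anchored in hardware (e.g., held by a TPM). The sketch below uses an HMAC as a software stand-in for that hardware-anchored signature; the key handling and message format are illustrative assumptions.

import hmac
import hashlib
import json

DEVICE_KEY = b"provisioned-into-device-hardware"  # in practice never leaves the device

def send_command(payload):
    """Device side: serialize the command and attach an integrity tag."""
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(DEVICE_KEY, body, hashlib.sha256).hexdigest()
    return {"body": body.decode(), "tag": tag}

def accept_command(message):
    """Receiving side: recompute the tag and reject anything that was modified."""
    expected = hmac.new(DEVICE_KEY, message["body"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["tag"])

msg = send_command({"valve": "A7", "action": "close"})
print(accept_command(msg))   # True: integrity intact
msg["body"] = msg["body"].replace("close", "open")
print(accept_command(msg))   # False: modified in transit, so the command is rejected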

CSP Future Requirements

Summarizing the requirements defined in preceding paragraphs, the future cloud will likely demand the following:
▪ Dynamic attestation: The ability to automatically query security maturity/compliance with agreed SLAs.
▪ Third-party access: Supporting high volumes of third-party requests, with granular access control models. In addition to the granularity of the access control, the ability to consider the context behind the request will be paramount in achieving compliance within highly regulated industries.
▪ Real-time assurance: Proactive attestation of security compliance undertaken in real time.
▪ End-to-end validation: Achieving a greater degree of assurance of the integrity behind the request to access cloud provisioned services.
While the above requirements for the future cloud are important, and indeed are an emerging series of requests from end customers, they by no means cover the entirety of the requirements for the future cloud. This is because many emerging requirements are broader than just those within the control of the Cloud Service Provider (CSP).

Cloud Ecosystem Requirements

As we covered in the preceding section, some of the evolutions required for the future cloud involve technological innovation and adoption by CSPs. However, this is only the tip of the iceberg (and indeed the reader may have some recommendations that we may have omitted), with many requirements considerably broader than those within the control of the provider. One such example is achieving greater clarity regarding the complicated standards landscape for cloud computing.

Cloud Computing Standards Complexity

In November 2013, as part of the European Commission’s European Cloud strategy, the European Telecommunications Standards Institute (ETSI) published a final report entitled “Cloud Standards Coordination.”21 The purpose of the report was to review major aspects of cloud computing, providing an analysis of the standards landscape, which had been perceived as a jungle of standards. The conclusion, however, was that the cloud standards landscape “is large but not chaotic and by no means a jungle.” The analysis summarized “20 relevant organizations in cloud computing Standardization and a selection of around 150 associated documents” to support the claim.
There are, however, a number of gaps identified within the cloud standards landscape; these are as follows.
Interoperability
The report concludes that there is a lack of standards for management specifications relating to Platform as a Service (PaaS) and Software as a Service (SaaS). In particular, the report concludes that while proprietary solutions do exist, the implication is that these would generate vendor lock-in situations, which have been identified as a major concern for end customers of cloud services. In addition, the report concludes that there is a lack of standards associated with service metrics and with the provision of monitoring data.
Security and Privacy
Security is recognized as integral to the wide-scale adoption of cloud computing; however, the assessment across multiple standards identified the lack of a common vocabulary to allow the end customer to express specific requirements as well as to understand the service offerings across multiple providers. Furthermore, there is a need for further metrics related to cloud computing.
Specific standards associated with accountability and cloud incident management (citing the example of SLA infringement) have been identified as areas that demand further standardization effort.
Service Level Agreements
A main gap associated with cloud computing standards is the requirement for standardization relating to SLAs. In particular,
▪ Agreed terminology and definitions for service level objectives
▪ Metrics for each service level objective.
Regulation, Legal and Governance Aspects
Relating to the legal landscape, the report concludes that there is a need for an international framework and governance, with associated global standards. The current landscape is built on national and pan-regional (e.g., European Union) requirements; however, the global nature of cloud computing demands broader interoperable requirements.
The above activities undertaken by ETSI are only a small snapshot of activities being undertaken in Europe, and indeed an even smaller snapshot of the global activities to establish the appropriate frameworks and tools necessary for the future cloud. Additional examples include
▪ Code of conduct for data protection: agreement of a code of conduct for cloud computing providers to support a uniform application of data-protection rules.
▪ SLAs: work to define a skeleton structure for cloud SLAs, identifying the components commonly found in cloud contracts and the most important elements for cloud SLAs, and proposing a subset of these elements to focus on.
▪ Cloud computing contracts: identification of safe and fair contract terms and conditions for cloud computing services for consumers and small firms.
The above are related to the efforts within Europe, and specifically within efforts undertaken by the European Commission. Within the Cloud Security Alliance there are a number of research initiatives underway to address the above issues, as well as those raised within preceding chapters. A full list of these is available from within the Cloud Security Alliance Web site, with a summary included in Chapter 8 (https://cloudsecurityalliance.org/research/).

Cloud Broker Services

It is, however, worth drawing attention to an emerging area of focus within the cloud ecosystem, namely, cloud broker services. Within the CSA, a new working group is being established to define security best practices for brokers, as well as to provide life cycle management for cloud brokerage services. According to the National Institute of Standards and Technology (NIST) (SP 500-292), a cloud broker is “an entity that manages the use, performance and delivery of cloud services, and negotiates relationships between cloud Providers and cloud Consumers.” It is anticipated that the cloud broker market will see significant growth as awareness of its benefits permeates across potential customers. The benefit of cloud brokers is that they have the potential to eliminate many of the concerns end customers have with cloud computing, as well as simplifying the overall process by which multiple CSPs are managed. For example, a policy may exist to ensure that all data leaving the internal network and being sent to public cloud providers must not contain credit card numbers. The broker should not only be able to leverage some form of data loss prevention to inspect the data, but also apply policy rules (e.g., do not allow these data to leave the enterprise, or allow but encrypt). The role of the broker is extended by announcements from CSPs that allow end customers to provide their own keys to encrypt data on the cloud service. If we use Amazon S3 as an illustration, “In between, it is up to you to manage your encryption keys and to make sure that you know which keys were used to encrypt each object. You can store your keys on-premises or you can use AWS Cloud HSM, which uses dedicated hardware to help you to meet corporate, contractual and regulatory compliance requirements for data security.”22 Where multiple providers are being used, and where specific regulatory requirements address the geographic location of keys (e.g., RIPA Part III), customers may look to leverage brokers for simplified key management. Such an approach will mean that, for the end customer, the complexity required to assure cloud-provisioned services can be handled by the broker. In fact, this should eliminate or at least reduce the risk of vendor lock-in, and if handled appropriately ensure that regulated data are not transferred to locations that do not meet policy or applicable laws. Of course, if the broker is provided as-a-service, then this only pushes these concerns further downstream; in other words, it becomes broker lock-in rather than CSP lock-in. Regardless, this may be a risk that is accepted, or alternatively an on-premise broker may be used.
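The data loss prevention policy described above can be sketched in a few lines. The pattern matching and policy actions below are deliberately simplified assumptions; a real broker would use far richer detection and genuine encryption.

import re

CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")  # crude primary account number detector

def broker_outbound(data, policy="block"):
    """Inspect outbound data before it reaches the public cloud and apply policy."""
    if CARD_PATTERN.search(data):
        if policy == "block":
            return None                                       # do not let the data leave
        if policy == "encrypt":
            return "<ciphertext:" + str(abs(hash(data))) + ">"  # stand-in for real encryption
    return data                                               # clean data passes through

print(broker_outbound("invoice 1001, total 45.00"))            # forwarded as-is
print(broker_outbound("card 4111 1111 1111 1111", "block"))    # None: blocked at the broker
print(broker_outbound("card 4111 1111 1111 1111", "encrypt"))  # encrypted placeholder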
According to Gartner, cloud service brokers can be divided into three categories:
▪ Cloud service intermediation: Intermediary that adds value to a service through the provision of additional capabilities. Examples will include services such as identity and access management where the end customer could acquire such capabilities to enhance the currently provisioned cloud services. The broker, however, will remain independent of the provisioned service provider.
▪ Aggregation: A brokerage service that combines multiple services into one, or new services. An example of this may be a broker that combines all of the offerings from the multiple cloud offerings that are being used, and offers a simple interface to manage the resources across all of the providers.
▪ Cloud service arbitrage: Although the aggregation service is likely to be fixed, the arbitrage category would automatically provide flexible aggregate choices to the end customer. What this means is that the broker would automatically select the most appropriate CSP on behalf of the customer, migrate the workload, and allow the customer to benefit financially, as sketched below.
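A toy illustration of that selection logic follows; the provider names, prices, and attributes are invented purely for the example.

# The broker picks the cheapest provider that still satisfies the workload's constraints.
PROVIDERS = [
    {"name": "csp-a", "region": "eu", "price_per_hour": 0.12, "certified": True},
    {"name": "csp-b", "region": "us", "price_per_hour": 0.09, "certified": True},
    {"name": "csp-c", "region": "eu", "price_per_hour": 0.10, "certified": False},
]

def select_provider(required_region, must_be_certified=True):
    """Return the cheapest provider meeting the region and certification constraints."""
    candidates = [p for p in PROVIDERS
                  if p["region"] == required_region
                  and (p["certified"] or not must_be_certified)]
    return min(candidates, key=lambda p: p["price_per_hour"]) if candidates else None

# An EU-regulated workload lands on csp-a, even though csp-b is cheaper overall.
print(select_provider("eu"))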
It is predicted that the cloud service broker industry will double in size, reaching $141bn by the year 2017.23 For this reason, the CSA is establishing the Cloud Broker Working Group to address these challenges and to establish cloud governance best practices, document use cases, identify security standards requirements (for example, integration into the Cloud Controls Matrix (CCM) or Consensus Assessments Initiative Questionnaire (CAIQ)), and explore other areas for potential research as it applies to brokers. Cloud computing will continue to evolve and introduce considerably more key stakeholders, which will likely demand further development of security standards and guidance to ensure they do not become the weakest link.
This and other CSA research initiatives are open to volunteers wishing to contribute their expertise to the various deliverables. Therefore, the reader is encouraged to get involved and participate, time permitting of course.
Cloud computing is evolving, and the number of use cases is growing exponentially. In certain cases, these use cases involve critical operations, from the hosting and management of systems keeping the lights on to treating the water delivered to consumers’ homes. The need for a safe and secure cloud has never been more important, and your support and expertise are therefore greatly appreciated.