CHAPTER 2
The Business of Cloud Computing

In this chapter, we evaluate the business impact of Cloud Computing.

We start by outlining the IT industry's transformation, which historically proceeded in smaller steps—first virtualization and then the move to Cloud. As we will see, this process has taken place in a dialectic spiral, influenced by conflicting developments. The centrifugal forces, “Shadow IT” and the Virtual Private Cloud, were moving computing out of the enterprise. Ultimately, the development has synthesized into bringing computing back into the transformed enterprise IT, by means of the Private Cloud.

Next, we move beyond the enterprise and consider the telecommunications business, which has been undergoing a similar process—known as Network Functions Virtualization (NFV)—and is now developing its own Private Cloud (a process in which all the authors have been squarely involved).

The Cloud transformation, of course, affects other business sectors, but the purpose of this book—and the ever-growing size of the manuscript—suggests that we draw the line at this point. It is true though that just as mathematical equations applicable to one physical field (e.g., mechanics) can equally well be applied in other fields (e.g., electromagnetic fields), so do universal business formulae apply to various businesses. The impact of Cloud will be seen and felt in many other industries!

2.1 IT Industry Transformation through Virtualization and Cloud

In the last decade the IT industry has gone through a massive transformation, which has had a huge effect on both the operational and the business sides of introducing new applications and services. To appreciate what has happened, let us start by looking at the old way of doing things.

Traditionally, in the pre-Cloud era, creating software-based products and services involved high upfront investment, high risk of losing this investment, slow time-to-market, and much ongoing operational cost incurred from operating and maintaining the infrastructure. Developers were usually responsible for the design and implementation of the whole system: from the selection of the physical infrastructure (e.g., servers, switching, storage, etc.) to the software-reliability infrastructure (e.g., clustering, high-availability, and monitoring mechanisms) and communication links—all the way up to translating the business logic into the application. Applications for a given service were deployed on a dedicated infrastructure, and capacity planning was performed separately for each service.

Here is a live example. In 2000, one of the authors1 created a company called Zing Interactive Media,2 which had the mission to allow radio listeners to interact with content they hear on the radio via simple voice commands. Think of hearing a great song on the radio, or an advertisement that's interesting to you, and imagine how—with simple voice commands—you could order the song or interact with the advertiser. In today's world this can be achieved as a classic Cloud-based SaaS solution.

But in 2000 the author's company had to do quite a few things in order to create this service. The first, of course, was to build the actual product that delivered the service. But on top of that there were major investments that were invisible to the end user:3

  1. Rent space at a hosting site (in this case, a secure area—a “cage”—at an AT&T hosting facility).
  2. Anticipate peak use and develop a redundancy scheme for the service.
  3. Specify the technical requirements for the servers needed to meet this capacity plan. (That involved a great deal of shopping around.)
  4. Negotiate vendor and support contracts, and purchase and install enough servers to meet the capacity plan (some would inevitably sit idle).
  5. Lease dedicated T1 lines4 for connectivity to the “cage” and pay for their full capacity regardless of actual use.
  6. Purchase the networking gear (switches, cables, etc.) and install it in the “cage.”
  7. Purchase and install software (operating systems, databases, etc.) on the servers.
  8. Purchase and install load balancers, firewalls, and other networking appliances.5
  9. Hire an IT team of networking experts, systems administrators, database administrators, and so on to maintain this setup.
  10. (Finally!) Deploy and maintain the unique software that actually delivered Zing Interactive Media's service.

Note that this investment had a huge upfront cost, incurred before the service was launched, and it provided no differentiation whatsoever to the product. Out of necessity, the investment was sized for the peak use pattern—not even the median use pattern. And even with all these precautions, the investment was based on an educated guess. In addition, as the service succeeded, scaling it up required planning and long lead times: servers take time to arrive, access to the hosting site requires planning and approvals, and it takes weeks for the network provider to activate newly ordered communication links.

We will return to this example later, to describe how our service could be deployed today using the Cloud.

The example is quite representative of what enterprise IT organizations have to deal with when deploying services (such as e-mail, virtual private networking, or enterprise resource planning systems). In fact, the same problems are faced by software development organizations in large companies.

When starting a new project, the development manager follows these steps:

  1. Make an overall cost estimate (in the presence of many uncertainties).
  2. Get approvals for both budget and space to host the servers and other equipment.
  3. Enter a purchase request for new hardware.
  4. Go through a procurement organization to buy a server (which may take three months or so).
  5. Open a ticket to the support team and wait until the servers are installed and set up, the security policies are deployed, and, finally, the connectivity is enabled.
  6. Install the operating system and other software.
  7. Start developing the actual value-added software.
  8. Go back to step 1 whenever additional equipment or outside software is needed.

When testing is needed, this process grows in proportion to the number of dedicated per-tester systems. A typical example of (necessary) waste is this: when a software product needs to be stress tested for scale, the entire infrastructure must be in place and waiting for the test, which may run for only a few hours in a week or even a month.

Again, we will soon review how the same problems can be solved in the Cloud with the Private Cloud setup and the so-called “Shadow IT.”

Let us start by noting that today the above process has been streamlined so that both developers and service providers can focus only on the added value they create. This has been achieved through the IT transformation to a new way of doing things. Two major enablers came in succession: first virtualization and then the Cloud itself.

Virtualization (described in detail in the next chapter) has actually been around for many years, but it was recently “rediscovered” by IT managers looking to reduce costs. Simply put, virtualization is about consolidating computing through the reuse of hardware. For example, if a company had 10 hardware servers, each running its own operating system and an application at fairly low CPU utilization, virtualization technology would enable these 10 servers to be replaced (without any change in software and without a significant performance penalty) with one or two powerful servers. As we will see in the next chapter, the key piece of virtualization is the hypervisor, which emulates the hardware environment so that each operating system and application running over it “thinks” it is running on its own server.
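As a back-of-the-envelope illustration of the consolidation arithmetic, consider the following minimal sketch (in Python); every figure in it is an assumption made up for the example, not a measurement:

    import math

    # Rough consolidation estimate: how many virtualized hosts are needed to
    # absorb a set of under-utilized dedicated servers. All figures below are
    # illustrative assumptions.
    dedicated_servers = 10      # physical servers, one application each
    avg_utilization = 0.10      # each runs at roughly 10% CPU on average
    peak_headroom = 1.5         # allowance for peaks and hypervisor overhead
    host_power_ratio = 2.0      # a new host is assumed ~2x as powerful as an old server
    host_busy_target = 0.70     # keep consolidated hosts below 70% busy

    load = dedicated_servers * avg_utilization * peak_headroom   # in "old-server" units
    capacity_per_host = host_power_ratio * host_busy_target
    hosts_needed = math.ceil(load / capacity_per_host)

    print(hosts_needed)         # -> 2: ten lightly loaded boxes collapse onto two hosts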

Thus, applications running on under-utilized dedicated physical servers6 were gradually moved to a virtualized environment enabling, first and foremost, server consolidation. With that, fewer servers needed to be purchased and maintained, which respectively translated into savings in Capital Expenditure (CapEx) and Operational Expenditure (OpEx). This is a significant achievement, taking into account that two-thirds of a typical IT budget is devoted to maintenance. Other benefits include improvements in availability, disaster recovery, and flexibility (as it is much faster to deploy virtual servers than physical ones).

With all these gains for the providers of services, the consumers of IT services were left largely with the same experience as before—inasmuch as the virtualization setups just described were static. Fewer servers were running, with higher utilization. An important step for sure, but it did not change the fundamental complexity of consuming computing resources.

The Cloud was a major step forward. What the Cloud provided to the IT industry was the ability to move to a service-centric, “pay-as-you-go” business model with minimal upfront investment and risk. Individuals and businesses developing new applications could benefit from low-cost infrastructure and practically infinite scale, allowing users to pay only for what they actually used. In addition, with Cloud, the infrastructure is “abstracted,” allowing users to spend 100% of their effort on building their applications rather than setting up and maintaining generic infrastructures. Companies like Amazon and Google have built massive-scale, highly efficient Cloud services.

As we saw in the previous chapter, from an infrastructure perspective, Cloud has introduced a platform that is multi-tenant (supporting many users on the same physical infrastructure), elastic, equipped with a programmable interface (via API), fully automated, self-maintained, and—on top of all that—has a very low total cost of ownership. At first, Cloud platforms provided basic infrastructure services such as computing and storage. In recent years, Cloud services have moved up the stack into software product implementations, offering more and more generic services—such as load-balancing-as-a-service or database-as-a-service, which allow users to focus even more on the core features of their applications.
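To make the notion of a programmable interface concrete, here is a minimal sketch of how a single virtual machine is requested through such an API (the sketch uses the boto3 library for Amazon EC2; the machine image, instance type, and key-pair name are placeholders rather than values taken from the text):

    import boto3

    # Request one virtual machine through the Cloud provider's API.
    # The image ID, instance type, and key pair below are placeholders.
    ec2 = boto3.resource("ec2", region_name="us-east-1")

    instances = ec2.create_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder machine image
        InstanceType="t3.medium",          # a mid-sized machine
        KeyName="my-key-pair",             # placeholder SSH key pair
        MinCount=1,
        MaxCount=1,
    )

    print(instances[0].id)  # the machine exists (and is billed) only from this moment on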

Let us illustrate this with an example. Initially, a Cloud user could only create a virtual machine. If this user needed a database, it would have to be purchased, installed, and maintained. One subtle problem here is licensing—typically, software licenses bind the purchase to a limited number of physical machines. Hence, when a virtual machine moves to another physical host, the software might not even run. With database-as-a-service on offer, however, the user merely needs to select the database of choice and start using it. The tasks of acquiring the database software along with the appropriate licenses, and of installing and maintaining it, now rest with the Cloud provider. Similarly, to effect load balancing (before the introduction of load-balancer-as-a-service), a user needed to create and maintain virtual machines both for the servers to be balanced and for the load balancer itself. As we will see in Chapter 7 and the Appendix, current technology and Cloud service offerings require that a user merely specify the server, which the Cloud provider replicates when needed, introducing load balancers to balance the replicas.
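By way of illustration, here is what the database-as-a-service path can look like (a sketch against Amazon RDS via boto3; all identifiers and credentials are placeholders):

    import boto3

    # With database-as-a-service, "getting a database" is a single API call;
    # licensing, installation, and maintenance remain with the Cloud provider.
    # All identifiers and credentials below are placeholders.
    rds = boto3.client("rds", region_name="us-east-1")

    rds.create_db_instance(
        DBInstanceIdentifier="zing-db",     # hypothetical instance name
        Engine="mysql",                     # the "database of choice"
        DBInstanceClass="db.t3.micro",
        AllocatedStorage=20,                # in GB
        MasterUsername="admin",
        MasterUserPassword="change-me-now",
    )
    # The database endpoint can then be read back and handed to the application.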

The latest evolution of the Cloud adds support for application life cycle management, offering generic services that replace what previously had to be part of the application itself. Examples of such services are auto-deployment, auto-scaling, application monitoring, and auto-healing.

For instance, in the past an application developer had to create monitoring tools as part of the application and also devise an algorithm to decide when more capacity should be added. When it was, the tools had to set up, configure, and bring on-line the new virtual machines and possibly a load balancer. Similarly, the tools had to decide whether the application was healthy and, if not, start auto-healing by, for example, creating a new server, loading it with the saved state, and shutting down the failed one.

Using the new life cycle services, all an application developer needs to do now is declare the rules for making such decisions and let the Cloud provider's software perform the necessary actions. Again, the developer's energy can be focused solely on the features of the application itself.
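To give a flavor of what “declaring the rules” means in practice, here is a sketch of a target-tracking scaling rule (using the AWS Auto Scaling API via boto3; the group name and the 60% CPU target are assumptions made for the illustration):

    import boto3

    # Declare the rule; the Cloud provider's software does the monitoring,
    # the decision making, and the creation or removal of servers.
    # The group name and target value are assumptions for this sketch.
    autoscaling = boto3.client("autoscaling", region_name="us-east-1")

    autoscaling.put_scaling_policy(
        AutoScalingGroupName="zing-web-tier",       # hypothetical group of servers
        PolicyName="keep-cpu-near-60-percent",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization"
            },
            "TargetValue": 60.0,                    # add or remove replicas around 60% CPU
        },
    )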

The technology behind this is that the Cloud provider essentially creates generic services, with an appropriate Application Programming Interface (API) for each service. What has actually happened is that the common-denominator features present in all applications have been “abstracted”—that is, made available as building blocks. This type of modularization has long been a principle of software development, but what could previously be achieved only through rigidly specified procedure calls to a local library is now done in a highly distributed manner, with the building blocks residing on machines other than the one running the application that assembles them.

Figure 2.1 illustrates this with a metaphor that is well known in the industry. Before the Cloud, the actual value-adding application was merely the tip of an iceberg as seen by the end user, while a huge investment still had to be made in the larger, invisible part that was not seen by the user.

[Figure description: Before deployment, the value-adding application is merely the tip of an iceberg, with software, maintenance, personnel, connectivity, hardware, and electricity below the waterline; after deployment, the application is depicted as a bird, with infrastructure-as-a-service depicted as a cloud beneath it.]

Figure 2.1 Investment in an application deployment—before and after.

An incisive example reflecting the change in this industry is Instagram. Facebook bought Instagram for one billion dollars. At the time of the purchase, Instagram had 11 employees serving 30 million customers. Instagram had no physical infrastructure, and only three individuals were employed to manage its infrastructure within the Amazon Cloud. There was no capital expense required, no physical servers needed to be procured and maintained, no technicians paid to administer them, and so on. This enabled the company to generate one billion dollars in value in two years, with little or no upfront investment in people or infrastructure. Most company expenses went toward customer acquisition and retention. The Cloud allowed Instagram to scale automatically as more users came on board, without the service crashing as it grew.

Back to our earlier example of Zing Interactive Media—if it were launched today, it would definitely follow the Instagram example. There would be no need to lease a “cage,” buy servers, rent T1 lines, or jump through the other hoops described above. Instead, we would be able to focus only on the interactive radio application. Furthermore, we would not need to hire database administrators, since our application could consume a database-as-a-service function. And finally, we would hire fewer developers, as building a robust, scalable application would be as simple as defining the life cycle management rules in the relevant service of the Cloud provider.

In the case of software development in a corporation, we are seeing two trends: Shadow IT and Private Cloud.

With the Shadow IT trend, in-house developers—facing the alternative of either following the process described above (which did not change much with virtualization) or consuming a Cloud service—often opted to bypass the IT department, take out a credit card, and start developing on a public Cloud. Consider the example of the stress test discussed above: with relatively simple logic, a developer can run this test at very high scale, whenever needed, and pay only for actual use. If scaling up is needed, it requires a simple change, which can be implemented immediately. Revisiting the steps of the old process and their costs (in both time and capital), one can see why this approach is taking off.
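For instance, the stress test just mentioned might be scripted roughly as follows (a sketch using boto3; the machine image, the fleet size, and the test harness itself are assumptions):

    import boto3

    # Bring up a fleet of load generators, run the test, and tear everything
    # down, paying only for the hours actually used. The image ID, fleet size,
    # and the test harness are placeholders for this sketch.
    ec2 = boto3.resource("ec2", region_name="us-east-1")

    def run_stress_test(fleet):
        """Placeholder for the actual load-generation and measurement logic."""

    fleet = ec2.create_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder image with the test agent installed
        InstanceType="c5.large",
        MinCount=50,
        MaxCount=50,                       # 50 load generators, on demand
    )

    try:
        run_stress_test(fleet)
    finally:
        for instance in fleet:
            instance.terminate()           # billing stops shortly after termination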

Many a Chief Information Officer (CIO) has observed this trend and understood that it is not enough just to implement virtualization in the data center (such setups are often called Private Clouds, but they really are not). The risks of Shadow IT are many, among them the loss of control over personnel. There are also significant security risks, since critical company data are now replicated in the Cloud. The matter of access to critical data (which we will address in detail in the Appendix) is particularly important, as it often concerns privacy and is subject to regulatory and legal constraints. For instance, the US Health Insurance Portability and Accountability Act (HIPAA)7 has strict privacy rules with which companies must comply. Another important example of the rules guarding data access is the US law known as the Sarbanes–Oxley Act (SOX),8 which sets standards for the boards and accounting firms of all US public companies.

These considerations, under the threat of Shadow IT, have led CIOs to take new approaches. One is called the Virtual Private Cloud, which is effected by obtaining from a Cloud provider a secure area (a dedicated set of resources). This approach allows a company to enjoy all the benefits of the Cloud, but in a controlled manner, with the company's IT in full control of security as well as costs. The service-level agreements and potential liabilities are clearly defined here.

The second approach is to build a true Private Cloud in the company's own data centers. The technology enabling this approach has evolved sufficiently, and so vendors have started offering the full capabilities of a Cloud as software products. One example, which we will address in much detail in Chapter 7 and the Appendix, is OpenStack, an open-source project developing Cloud-enabling software. With products like these, enterprise IT departments can advance their own data centers from merely supporting virtualization to being a true Cloud, with services similar to those offered by a public Cloud provider. These Private Clouds provide services internally, with most of the benefits of the public Cloud (obviously at a more limited scale), but under full control and ultimately at lower cost, as the Cloud provider's margin is eliminated.

The trend for technology companies is to start in a public Cloud and then, after reaching the scale-up plateau, move to a true Private Cloud to save costs. Most famous for this is Zynga—the gaming company that produced FarmVille, among other games. Zynga started out on Amazon Web Services. When a game started to take off and its use patterns became predictable, Zynga moved it to its in-house Cloud, called zCloud, which is optimized for gaming needs. Similarly, eBay has deployed the OpenStack software on 7000 servers that today power 95% of its marketplace.9

It should now be clear that the benefits of the Cloud are quite significant. But the Cloud has a downside, too.

We have already discussed some of the security challenges above (and, again, we will be addressing security throughout the book). It is easy to fall in love with the simplicity that the Cloud offers, but the security challenges are very real, and, in our opinion, are still under-appreciated.

Another problem is control over hardware choices to meet reliability and performance requirements. Psychologically, it is not easy for developers to relinquish control over the exact specification of the servers they need and the choice of CPU, memory, form factor, and network interface cards. In fact, the issue is not only psychological. Whereas before a developer could be assured of meeting specifications, now one must simply trust the Cloud infrastructure to respond properly to an API call to increase computing power. In this situation, it is particularly important to develop and evaluate overarching software models in support of highly reliable and high-performance services.

As we will see later in this book, Cloud providers respond to this by adding capabilities to reserve specific (yet hardware-generic) configuration parameters—such as the number of CPU cores, memory size, storage capacity, and networking “pipes.”

Intel, among other CPU vendors, is contributing to solving these problems. Take, for example, an application that needs a predictable amount of CPU power. Until recently, the Cloud could not assure with fine granularity how much CPU an application would receive, which could be a major problem for real-time applications. Intel provides an API that allows the host to guarantee a certain percentage of the CPU to a given virtual machine. This capability, effected by assigning a virtual machine to a given processor or range of processors—so-called CPU pinning—is exposed via the hypervisor and the Cloud provider's systems, and it can be consumed by the application.
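As a sketch of how such a guarantee surfaces at the infrastructure layer: in OpenStack, for example, a flavor can request dedicated (pinned) CPUs, which the hypervisor realizes as a libvirt pinning map. The snippet below only constructs the corresponding settings; the particular core assignments are illustrative assumptions.

    # CPU pinning as exposed by the infrastructure layer (illustrative sketch).
    # In OpenStack, the flavor extra spec "hw:cpu_policy": "dedicated" requests
    # pinned CPUs; underneath, libvirt pins each guest vCPU to a host core.
    flavor_extra_specs = {
        "hw:cpu_policy": "dedicated",      # one dedicated host core per guest vCPU
    }

    # An equivalent libvirt pinning map for a 4-vCPU guest (host cores chosen
    # arbitrarily for the example):
    vcpu_to_host_core = {0: 8, 1: 9, 2: 10, 3: 11}

    cputune_xml = "<cputune>\n" + "".join(
        f'  <vcpupin vcpu="{vcpu}" cpuset="{core}"/>\n'
        for vcpu, core in vcpu_to_host_core.items()
    ) + "</cputune>"

    print(cputune_xml)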

As one uses higher abstraction layers, one gains simplicity; but in consuming generic services, one's ability to do unique things becomes very limited. Put otherwise, if a capability is not exposed through an API, it cannot be used. For example, if one would like to use a specific advanced function of a particular vendor's load balancer, one is in trouble in a generic Cloud. One can only use the load-balancing functions exposed by the Cloud provider's API, and in most cases one would not even know which vendor is powering the service.

The work-around here is to descend the abstraction ladder. In the example of the last paragraph, one can purchase a virtual version of the vendor's load balancer, bring it up as a virtual machine as part of one's project, and then use it. In other words, higher abstraction layers might not help to satisfy unique requirements.

2.2 The Business Model Around Cloud

Cloud service providers, such as Google or Amazon, run huge infrastructures. It is estimated that Google has more than one million physical servers and that the Amazon Cloud provides infrastructure to 1.5–2 million virtual machines. These huge data centers are built using highly commoditized hardware and run by very small operational teams (only tens of people per shift manage all of Google's servers), leveraging automation in order to provide new levels of operational efficiency. Although the infrastructure components themselves are not highly reliable (Amazon only offers a 99.95% SLA), the infrastructure automation and the way applications are written to leverage this infrastructure enable a rather reliable service (e.g., the Google search engine or the Facebook Wall) at a fraction of the cost that other industries would incur for similar services.

Cloud provides a new level of infrastructure efficiencies and business agility, and it achieves that with a new operational model (e.g., automation, self-service, standardized commodity elements) rather than through performance optimization of infrastructure elements. The CapEx investment in hardware is less than 20% of the total cost of ownership of such infrastructures. The rest is mainly operational and licensing cost. The Cloud operational model and software choices (e.g., use of open-source software) enable a dramatic reduction in total cost—not just in the hardware, as is the case with virtualization alone.

Let us take a quick look at the business models offered by Cloud providers and software and service vendors, presented respectively in the subsections that follow.

2.2.1 Cloud Providers

Cloud offers a utility model for its services: computing, storage, application, and operations. This comes with an array of pricing models, which balance an end user's flexibility against price. The highest prices are charged for the most flexible arrangement—everything on demand with no commitment. Better pricing is offered for reserved capacity—or a guarantee of a certain amount of use in a given time—which allows Cloud providers to plan their capacity better. For example, at the time of writing this chapter, using the Amazon pricing tool on its website we obtained a quote from AWS for a mid-sized machine at $0.07 per hour for on-demand use. Reserved capacity for the same machine was quoted at $0.026 per hour—a 63% discount. This pricing does not include networking, data transfers, or other costs.10
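A quick back-of-the-envelope comparison based on these two quotes (the hourly prices are as cited above; the assumption of round-the-clock use is ours):

    # Compare the on-demand and reserved quotes cited above.
    on_demand_rate = 0.07     # USD per hour, mid-sized machine, on demand
    reserved_rate = 0.026     # USD per hour, same machine, reserved capacity
    hours_per_year = 24 * 365

    discount = 1 - reserved_rate / on_demand_rate
    print(f"Reserved discount: {discount:.0%}")                             # ~63%

    # For a machine that runs around the clock, the annual difference:
    print(f"On demand: ${on_demand_rate * hours_per_year:,.0f} per year")   # ~$613
    print(f"Reserved:  ${reserved_rate * hours_per_year:,.0f} per year")    # ~$228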

Higher prices are charged for special services, such as the Virtual Private Cloud mentioned earlier. Finally, the best pricing is spot pricing, in which it is the Cloud provider who defines when the sought services are to be offered (that is, at the time when the provider's capacity is expected to be under-utilized). This is an excellent option for off-line computational tasks. For the Cloud providers, it ensures higher utilization.

One interesting trend, led by Amazon AWS, is the constant stream of price reductions. As Amazon adds scale and as storage and other costs go down, Amazon is taking the approach of reducing the pricing continuously—thereby increasing its competitive advantage and making the case, for potential customers, for moving to the Cloud even more attractive. In addition, Amazon continuously adds innovative services, such as the higher application abstraction mentioned above, which, of course, come with new charges. Additional charges are also made for networking, configuration changes, special machine types, and so forth.

For those who are interested in the business aspects of the Cloud, we highly recommend Joe Weinman's book [1], which also comes with a useful and incisive website11 offering, among many other things, a set of simulation tools to deal with structure, dynamics, and financial analysis of utility and Cloud Computing. We also recommend another treatise on Cloud business by Dr. Timothy Chou [2], which focuses on software business models.

2.2.2 Software and Service Vendors

To build a Private Cloud, a CIO organization needs to create a data center with physical servers, storage, and so on.12 Then, in order to turn that into a Cloud, it has the choice of either purchasing the infrastructure software from a proprietary vendor (such as VMware) or using open-source software. OpenStack, addressed further in Chapter 7, is an open-source project that allows its users to build a Cloud offering services similar to those of Amazon AWS.

Even though software from open-source projects is free for the taking, in practice—when it comes to large open-source projects—it is hard to avoid the costs associated with maintenance. Thus, most companies prefer not to take software directly from open-source repositories, instead purchasing it from a vendor who offers support and maintenance (upgrades, bug fixes, etc.). Companies like Red Hat and Canonical lead this segment. Pricing for these systems is usually based on the number of CPU sockets used in the Cloud cluster. Typically, the fee is annual and does not depend on actual use metrics.

In addition, most companies use a professional services firm to help them set up (and often also manage) their Cloud environments. This is usually priced per project on a time-and-materials basis.

2.3 Taking Cloud to the Network Operators

At the cutting edge of the evolution to Cloud is the transformation of the telecommunications infrastructure. As we mentioned earlier, the telecommunications providers—who are also typically regulated in their respective countries—offer by far the most reliable and secure real-time services. Over more than 100 years, telecommunications equipment has evolved from electro-mechanical cross-connect telephone switches to highly specialized digital switches, and then to the data switches that make up present-day telecommunications networks. Further, these “boxes” have been interconnected with specialized networking appliances13 and general-purpose high-performance computers that run operations and management software.

The Network Functions Virtualization (NFV) movement is about radically transforming the “hardware-box-based” telecom world along Cloud principles.14

First, let us address the problem that the network operators wanted to solve. While most of what we know as a “network function” today is provided by software, this software runs on dedicated “telecom-grade” hardware. “Telecom grade” means that the hardware is (1) specifically engineered for running in telecommunications networks, (2) designed to live in the network for over 15 years, and (3) functional 99.999% of the time (the “five nines,” i.e., about 5 minutes of downtime per year). This comes with a high cost of installing and maintaining customized equipment. Especially when taking into account Moore's “law,” according to which computing power doubles every 18 months, one can easily imagine the problems that accompany a 15-year-long commitment to dedicated hardware.
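These availability figures translate into downtime as follows (a simple calculation; the 99.95% level is included for comparison with the typical Cloud SLA cited earlier in this chapter):

    # Allowed downtime per year at a given availability level.
    minutes_per_year = 365 * 24 * 60

    for availability in (0.9995, 0.99999):   # a typical Cloud SLA vs. "five nines"
        downtime_min = (1 - availability) * minutes_per_year
        print(f"{availability:.3%} -> {downtime_min:.1f} minutes of downtime per year")

    # 99.950% -> 262.8 minutes (about 4.4 hours) per year
    # 99.999% -> 5.3 minutes per year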

Network providers have been facing shrinking margins and growing competition. And that competition now comes not only from within the telecom industry, but also from web-based service providers, known as Over-The-Top (OTT) providers.

Solving this problem requires a new operational model that reduces costs and speeds up the introduction of new services for revenue growth.

To tackle this, seven of the world's leading telecom network operators joined together to create a set of standards that would become the framework for virtualizing network services. On October 12, 2012, representatives of 13 network operators15 worldwide published a White Paper16 outlining the benefits and challenges of doing so and issuing a call for action.

Soon after that, 52 other network operators—along with telecom equipment vendors, IT vendors, and technology consultants—formed the ETSI NFV Industry Specifications Group (ISG).17

The areas where action was needed can be summarized as follows. First, operational improvements. Running a network comprising equipment from multiple vendors is far too complex and requires too much overhead (compared with a Cloud operator, a telecom network operator has to deal with an order of magnitude more spare parts).

Second, cost reductions. Managing and maintaining the infrastructure using automation would require a tenth of the people presently involved in “manual” operations. With that, the number of “hardware boxes” in a telecom network is about 10,000(!) times larger than that in a Cloud operator's data center.

Third, streamlining high-touch processes. Provisioning and scaling services presently require manual intervention, and it takes 9 to 18 months to scale an existing service, whereas Cloud promises instant scaling.

Fourth, reduction of development time. Introducing new services takes 16 to 25 months. Compare this to several weeks in the IT industry and to immediate service instantiation in the Cloud.

Fifth, reduction of replacement costs. The lifespans of services keep shortening, so the software, and along with it the dedicated hardware, must be replaced ever more often, which is where the sixth—and last—area comes in.

Sixth, reduction of equipment costs. (The hint lies in comparing the price of proprietary vendor-specific hardware with that of commodity off-the-shelf x86-based servers.)

To deal with the above problem areas, tried-and-true virtualization and Cloud principles have been called for. To this end, the NFV is about integrating into the telecom space many of the same Cloud principles discussed earlier. It is about first virtualizing the network functions pertinent to routing, voice communications, content distribution, and so on and then running them on a high-scale, highly efficient Cloud platform.

The NFV space can be divided into two parts: the NFV platform and the network functions running on top of it. The idea is that the network functions run on a common shared platform (the NFV platform), which is embedded in the network. Naturally, the network is what makes a major difference between a generic Cloud and the NFV, as the raison d'être of the latter is delivering network-based services.

The NFV is about replacing physical deployment with virtual: network functions are deployed dynamically, on demand, across the network on Common Off-The-Shelf (COTS) hardware. The NFV platform automates the installation and operation of Cloud nodes, orchestrates massive-scale distributed data centers, manages and automates application life cycles, and leverages the network. Needless to say, the platform is open to all vendors.

To appreciate the dynamic aspect of the NFV, consider the Content Delivery Networking (CDN) services (all aspects of which are thoroughly discussed in the dedicated monograph [3], which we highly recommend). In a nutshell, when a content provider (say a movie-streaming site) needs to deliver a real-time service over the Internet, the bandwidth costs (and congestion) are an obstacle. A working solution is to replicate the content on a number of servers that are placed, for a fee, around various geographic locations in an operator's network to meet the demand of local users. At the moment, this means deploying and administering physical servers, which comes with the problems discussed earlier. One problem is that the demand is often based on the time of day. As the time for viewing movies on the east coast of the United States is different from that in Japan, the respective servers would be alternately under-utilized for large periods of time. The ability to deploy a CDN server dynamically to data centers near the users that demand the service is an obvious boon, which not only saves costs, but also offers unprecedented flexibility to both the content provider and the operator.
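A sketch of the kind of placement logic this enables is given below (the regions, the “evening” window, and the deploy/retire calls are all hypothetical stand-ins for an NFV platform's API, invented for the illustration):

    from datetime import datetime, timedelta, timezone

    # Decide, per region, whether a virtual CDN cache should be running.
    # Regions, UTC offsets, and the "evening" window are illustrative
    # assumptions; the deploy/retire functions stand in for the platform's API.
    REGIONS = {"us-east": -5, "europe-west": +1, "japan": +9}   # UTC offsets

    def deploy_cdn_cache(region: str) -> None:
        print(f"ensure a CDN cache is running in {region}")     # placeholder action

    def retire_cdn_cache(region: str) -> None:
        print(f"retire the CDN cache in {region}")              # placeholder action

    def evening_in(region: str, now_utc: datetime) -> bool:
        local = now_utc + timedelta(hours=REGIONS[region])
        return 18 <= local.hour <= 23          # assumed peak movie-watching hours

    def reconcile() -> None:
        now_utc = datetime.now(timezone.utc)
        for region in REGIONS:
            if evening_in(region, now_utc):
                deploy_cdn_cache(region)       # capacity follows the local evening
            else:
                retire_cdn_cache(region)       # free the capacity for other tenants

    reconcile()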

Similar, although more specialized, examples of telecommunications applications that immediately benefit from NFV are the IP Multimedia Subsystem (IMS) for the Third Generation (3G) [4] and the Evolved Packet Core (EPC) for the Fourth Generation (4G) broadband wireless services [5]. (As a simple example: consider the flexibility of deploying—among the involved network providers—those network functions18 that support roaming).

Network providers consider the NFV both disruptive and challenging. The same goes for many of the network vendors in this space.

The founding principles for developing the NFV solution are as follows:

  • The NFV Cloud is distributed across the operator's network, and it can be constructed from elements that are designed for zero-touch, automated, large-scale deployment in central offices19 and data centers.
  • The NFV Cloud leverages and integrates with the networking services in order to deliver a full end-to-end guarantee for the service.
  • The NFV Cloud is open in that it must be able to facilitate different applications coming from different vendors and using varying technologies.
  • The NFV Cloud enables a new operational model by automating and unifying the many services a service provider runs, such as distributed Cloud locations and the application life cycle (further described in Chapter 7).
  • The NFV Cloud must provide a high degree of security. (On this subject, please see the White Paper published by TMCnet, which outlines the authors' vision.20)

No doubt, this latest frontier shows us that the Cloud is now mature enough to change even more traditional industries—such as the energy sector. In coming years, we will see the fundamental effect of the Cloud on these industries' financial results and competitiveness.

Notes

References

  1. Weinman, J. (2012) Cloudonomics: The Business Value of Cloud Computing. John Wiley & Sons, Inc., New York.
  2. Chou, T. (2010) Cloud: Seven Clear Business Models, 2nd edn. Active Book Press, Madison, WI.
  3. Hofmann, M. and Beaumont, L.R. (2005) Content Networking: Architecture, Protocols, and Practice (part of the Morgan Kaufmann Series in Networking). Morgan Kaufmann/Elsevier, Amsterdam.
  4. Camarillo, G. and García-Martín, M.-A. (2008) The 3G IP Multimedia Subsystem (IMS): Merging the Internet and the Cellular Worlds, 3rd edn. John Wiley & Sons, Inc, New York.
  5. Olsson, M., Sultana, S., Rommer, S., et al. (2012) EPC and 4G Packet Networks: Driving the Mobile Broadband Revolution, 2nd edn. Academic Press/Elsevier, Amsterdam.