Key topics in this lesson:
Understanding the transformation from traditional IT
The evolution of cloud computing
Definitions and characteristics of cloud computing
Example cloud service architectures
Analysis and comparison of cloud deployment models
Planning and architecture best practices
Before undertaking any transition to cloud computing, it is important to understand its basic fundamentals and how information technology (IT) has evolved up to this point. When cloud computing was just beginning, some of the terms and models were unproven concepts, promising limitless benefits—now, we’ve had the benefit of time and experience to update those concepts into real-world system designs, deployment models, and best practices.
IT is clearly one of the fastest-moving and most rapidly evolving industries in the world. We all know that the processing power of computers multiplies every few years, storage capacity doubles every couple of years, and software applications continuously evolve—usually for the better. Just as the IT industry evolves in general, so too must the IT departments within small, large, commercial, and government organizations.
In the 1960s and 1970s, only large organizations and universities could afford—or more accurately, needed—an IT infrastructure, which often comprised a centralized mainframe computer and remote terminals for access to information and computing resources. Later, organizations began utilizing smaller centralized minicomputers, and then moved toward a system of microcomputers with much larger and distributed processing power to access and manage information everywhere. As you can imagine, this had a significant impact on IT departments.
As we entered the Internet era, outsourcing and staff augmentation exploded so that IT departments could keep up with new technologies and find enough skilled personnel. Today, the IT departments in many organizations are larger than the core business functions the company actually performs or sells to their customers.
With such large and complex internal IT departments, hired consultants, IT outsourcing and augmentation, and the actual expense of computer assets (hardware, software, etc.), companies are wondering if they really get a solid return on investment (ROI). Worldwide economic declines are also making organizations reevaluate their business and financial models. One thing that has become clear is that unless an organization is actually in the business of providing IT services, it should focus on its core mission and customers, not on large internal IT departments or datacenters. Taking this into consideration and comparing the benefits of the cloud, it becomes evident that transitioning to cloud computing can offer both cost savings and corporate IT right-sizing.
Cloud sales and marketing campaigns often use the term revolutionary shift to describe the advancements that cloud computing brings to organizations. The claim is that cloud computing is the most significant change in the industry in more than 10 years. I disagree. This is not a revolutionary shift; rather, it’s evolutionary. It is an evolution of information technology enabling a new style of IT services at a faster pace than in the past.
Looking at the IT industry from a broader perspective, the adoption and proliferation of the Internet was a true paradigm shift, and in the more than 15 years since everyone began using it regularly, there has been a steady progression of Internet or web-based technology advancement. We went from static web pages to dynamic content, and then on to hosted applications accessed via the Internet. Then, we expanded the size and domain of traditional IT services to make further use of the Internet and wide area networks (WANs), hosting more servers and applications at third-party Internet service providers (ISPs) and application service providers (ASPs)—essentially the precursor to what we now call public cloud providers.
Cloud computing, as discussed later in this lesson, takes technology and IT concepts from the past and transforms them into a faster delivery model, providing new IT services and business value to customers at a pace we’ve never before seen. Cloud computing is also somewhat unique in that business value and a focus on the end consumer are now at the forefront of planning and execution. Whereas a traditional IT department or service is often considered a cost center, cloud computing is an accelerator of business innovation, efficiency, and service to customers.
We have all heard the phrase “cheaper, better, and faster.” I would easily confirm that cloud computing provides better IT services at a faster development and launch pace; however, there is some debate regarding the “cheaper” part. Although automation, virtualization, and elastic services provide clear cost benefits, the effort and cost to initially deploy the necessary cloud systems do not necessarily yield immediate savings.
Figure 1-1 illustrates how cloud computing is really just an evolution, not necessarily a paradigm shift, contrary to some of the industry marketing hype. Note how application platforms have matured (below the line) versus the computing technology (above the line) as the industry evolved into this cloud computing era.
It is important to understand how we arrived at this cloud-centric point in the information technology industry. I won’t spend too much time reminiscing about the past, but there is value in understanding the origins of cloud computing. History does tend to repeat itself, and this applies as much to the computer industry as anything else. So, let me take a moment to explain how historical trends put us on this path, how we began using many of these technologies 30 years ago, and how historic IT principles are still valuable today.
Although it did not go by the name “cloud,” the concept of cloud computing was predicted in 1961 by Professor John McCarthy of the Massachusetts Institute of Technology when he stated:
Computing may someday be organized as a public utility just as the telephone system is a public utility ... Each subscriber needs to pay only for the capacity he actually uses, but he has access to all programming languages characteristic of a very large system ... Certain subscribers might offer service to other subscribers ... The computer utility could become the basis of a new and important industry.1
In the early days of computer technology, the mainframe computer was a large centralized computing platform with remote dumb terminals used by end users. These terminals could be compared to thin-client devices in today’s industry, with the mainframe being the centralized cloud computing platform. This centralized mainframe held all of the computing power (the processing cores), memory, and connected storage, managed by a small operations staff for shared use by a massive number of users.
Sounds a little like cloud computing, doesn’t it?
There are further similarities when comparing mainframe-computing environments to today’s cloud. Although the mainframe was physically very large, it wasn’t all that powerful by modern standards. What the mainframe excelled at was throughput of input/output (I/O) processing—its ability to move data through the system. Ideally, the mainframe systems were managed by a centralized IT staff to maintain security, account management, backup and recovery, system upgrades, and customer support, all of which are components of today’s modern datacenters and cloud systems.
Virtualization is another concept that existed more than 30 years ago. Indeed, it was heavily utilized in mainframe computing. Multiple customers and users shared the overall system, but used virtualized segments of the overall operating system, called virtual machines (VMs). This is almost exactly what is done in today’s modern cloud computing environments.
Virtualization and VMs are not unique to cloud computing; these technologies have existed for more than 30 years.
The basic concepts of cloud computing have been in the IT industry all along; dust off an old mainframe concepts book and you will be surprised by the similarities. Now that we have personal computers and servers with huge amounts of memory, processing power, and storage, virtualization is even more economical and efficient, harnessing excess computing power that would otherwise go underutilized. As we move into the next generation of cloud environments, virtualization of servers is commonplace, with the new focus being on virtualizing networks, storage, and the entire datacenter in what is called a software-defined datacenter.
Starting in the late 1980s and into the year 2000, the industry began a huge shift from centralized computing to distributed computing. These small distributed servers held more memory, processors, and storage than most mainframes, but the internal server I/O and network were now a challenge. After 20 years of deploying countless new, smaller servers across thousands of datacenters, computer resources (CPU, memory, storage, networking) and management (security, operations, backup/recovery) are now spread out across organizations, and sometimes even across multiple contractors or providers. Many business models have actually shown an increase in the cost of managing the entire systems lifecycle. At least the cost of compute power is a fraction of what it once was due to ever-increasing performance and ever-decreasing prices.
Today’s computing environments are highly distributed, but consolidation of server farms and datacenters is in full swing. This consolidation involves deploying higher-capacity servers and IT assets into fewer, smaller physical datacenters—providing equal or greater computing capability while using smaller, more powerful computers to eliminate inefficient legacy systems. This consolidation will eventually bring down operational and management costs, accomplishing more with fewer IT assets, facilities, and personnel, which are among the costliest assets.
Consider what to do next with distributed computing platforms. Mobile devices (notebooks, tablets, and smartphones) already outsell desktop workstations throughout the world. Servers are being consolidated at an increasing pace and achieving densities within datacenters never before thought possible. In fact, modern datacenters are packing so many servers into each rack that, often, power and HVAC are the limiting scalability factors rather than physical space.
With smaller and less powerful (compared to a full desktop workstation) end-user devices, we are headed back to a model wherein the compute power is held more in the datacenters than at the edge/user device. This is especially true for thin-client devices and virtual desktop infrastructure (VDI, or in “cloud speak,” Workplace or Desktop as a Service). Not every end-user device will become “dumb” or “thin client,” but there is clearly a mix of users who need varying levels of compute power on their edge devices.
In a relatively short period of time, we have gone from centralized compute processing with thin end-user/edge devices to a highly distributed compute environment, and now we’re headed back toward centralization to a certain degree—this time using clouds and consolidating legacy datacenter IT assets. History is repeating itself. Let’s hope we are making some intelligent decisions and doing it better this time. Some could argue that mainframes still play a large role in today’s IT industry, and that they were “the best” business model all along.
As we consolidate many of the distributed computing platforms, datacenters, and the occasional retiring of a mainframe system, it is important to realize where we are headed and why.
As I look at today’s cloud computing environment and our immediate future, we are shifting back to virtualization and multitenancy concepts that were founded in the early days of centralized (mainframe) computing. Though these might be long-standing concepts in the IT industry, cloud computing is pushing ever upward to new heights in the areas of automation, elasticity, on-demand ordering, pay-as-you-go pricing, and self-service management and control systems.
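The pay-as-you-go model mentioned above can be illustrated with a back-of-the-envelope metering calculation. The sketch below is purely illustrative—the rates and resource names are hypothetical, not any provider’s actual price list—but it captures the core idea: charges accrue only for capacity actually consumed.

```python
# Illustrative pay-as-you-go metering (hypothetical rates, not a real
# provider's price list): you pay per unit-hour consumed, so idle or
# deprovisioned capacity costs nothing.

HOURLY_RATES = {          # hypothetical price list, USD per unit-hour
    "vm.small": 0.0625,
    "vm.large": 0.25,
    "storage.gb": 0.0001,
}

def usage_charge(resource: str, units: float, hours: float) -> float:
    """Charge = rate x units x hours; zero usage means zero charge."""
    return HOURLY_RATES[resource] * units * hours

# A VM that runs only 10 hours this month costs a fraction of one that
# runs the full 720 hours:
burst = usage_charge("vm.large", 1, 10)       # 0.25 * 10  = 2.5
always_on = usage_charge("vm.large", 1, 720)  # 0.25 * 720 = 180.0
```

This is the financial contrast with traditional capital purchases, where the full cost of a server is incurred whether it runs at 5 percent or 95 percent utilization.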
Organizations now understand that they might not benefit from large and costly IT departments that do not directly contribute to their customers and their core mission. Outsourcing IT functions and personnel is nothing new, but cloud computing represents a new form of outsourcing, scalability, and cost control, if managed wisely. With cloud computing, the burden of building, maintaining, upgrading, and operating the compute systems is the responsibility of the provider. This gives the consuming organization ultimate flexibility and choice of providers and eliminates being locked into a single one. This results in faster deployment of services at a lower cost so that the consuming organization can focus on its core business functions and customers, not on an IT department. This is the evolution or new style of IT service delivery that has taken 30 years to achieve.
Cloud computing results in faster deployment of services at a lower cost. This means that the consuming organization can focus on its core business functions and customers, not on its IT department.
So how are chief information officers (CIOs) transforming and benefiting from cloud computing? There is clearly a reduction in the use of traditional “managed services” and generic “time and materials” IT contractors providing computer services. Cloud consumers both small and large are able to select the cloud provider, pay for the services utilized, and scale up or scale down if finances or priorities of the business change. Organizations are no longer stuck with unneeded computer systems, server farms, and datacenters, which leads to greater agility in their overall business decisions.
Here are some of the recent trends and updated benefits CIOs can take advantage of by shifting to cloud services:
Managed service contracts transitioning to cloud service providers with more scalability (up and down) and less risk to the consuming organization.
Ability to slowly shift key applications and traditional IT to the cloud—moving to the cloud does not need to be an all-or-nothing transition.
Increased choice and flexibility for the consuming organization by avoiding lock-in to a single provider by using a hybrid cloud deployment model or cloud service brokering.
Organizations pay for cloud usage, which is carefully monitored and measured. In previous managed services models, it was often difficult to see actual results based on IT costs.
Centralized and efficiently utilized compute resources managed by fewer personnel with heavy use of automation and consistent processes, resulting in lower cost and better quality for consumers.
Lifecycle management, upgrade, and replacement of used resources are the responsibility of the cloud provider, resulting in reduced cost, labor, time, and risk for individual IT organizations performing this task in a traditional IT environment.
Consuming organizations do not need a large number of experienced senior IT personnel, who are expensive, difficult to find, and challenging to keep. Technical staff can better focus on the mission-critical applications of their businesses rather than managing commodity IT.
There are also some challenges that CIOs and business executives need to consider when moving to a cloud service:
Organizations have significant legacy computing resources (servers, datacenters, and IT personnel) that will need to be transitioned or eliminated in order to achieve the true cost savings and flexibility provided by cloud providers and services. Often these existing computing resources have not yet been fully depreciated, which can make the business case for adopting cloud computing harder to justify. Some organizations do not necessarily see immediate savings from the cloud.
Migrating large mission-critical applications to the cloud can be complicated and somewhat expensive (unlike commodity IT services, which are much easier and less costly to transition). Businesses should evaluate whether their custom and legacy applications are worth the reinvestment, or if an alternative cloud-enabled service exists which might be a better fit in the long term.
Private cloud deployments do not always have sufficient redundancy in geographically diverse hosting facilities. Using multiple datacenter facilities and/or multiple cloud providers can provide improved service availability and continuity of operations.
Procurement and budgeting for cloud services is a challenge to some commercial and government organizations. Existing procurement policies might need to be adapted.
Existing security, operations, and other processes within consuming organizations need to adapt to this new cloud computing model, in which services, applications, and VMs are launched through automation.
The first thing to clarify is the use of the term “cloud computing” in general. Throughout this lesson, I refer to cloud computing as “cloud services”; this is actually a more accurate term. Cloud computing, although the accepted industry nomenclature, originated from the concept of hosting computer (processor, memory, storage) resources in the cloud; hence, the term cloud computing.
Though it is still a relatively new term, cloud computing has already grown in scope and meaning to encompass applications, virtual desktops, automated deployment, service orchestration, and more—almost anything related to IT that an organization would want hosted and serviced through the cloud. The industry term is as-a-Service, the “aaS” portion of a number of acronyms that have become ubiquitous in recent years, such as XaaS, which refers to any cloud-based application or service provided to consumers. The most common models are Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS), all of which you’ll see later in this lesson.
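A common industry rule of thumb is that the IaaS/PaaS/SaaS split comes down to which layers of the stack the provider manages versus the consumer. The sketch below encodes that rule of thumb as a simple lookup table; the layer names and boundaries are illustrative, and exact responsibility splits vary by provider.

```python
# Rule-of-thumb responsibility split across the common cloud service models.
# Layer names and boundaries are illustrative only; real providers vary.

LAYERS = ["facility", "hardware", "virtualization",
          "operating_system", "middleware", "application", "data"]

PROVIDER_MANAGES = {
    "IaaS": {"facility", "hardware", "virtualization"},
    "PaaS": {"facility", "hardware", "virtualization",
             "operating_system", "middleware"},
    "SaaS": {"facility", "hardware", "virtualization",
             "operating_system", "middleware", "application"},
}

def consumer_manages(model: str) -> list:
    """Layers left to the consuming organization under a given service model."""
    return [layer for layer in LAYERS if layer not in PROVIDER_MANAGES[model]]

# consumer_manages("IaaS") -> ['operating_system', 'middleware', 'application', 'data']
# consumer_manages("SaaS") -> ['data']
```

Note that in every model the consumer retains responsibility for its own data—a point that matters later when evaluating security and governance requirements.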
The National Institute of Standards and Technology (NIST) definition of the cloud states the following:
Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.
Although there are several companies and individuals claiming credit for first using the term “cloud” as pertaining to cloud computing, the real-world meaning of cloud is not truly a rigid definition. Many consider the cloud just another term for the Internet, and, depending on how the term is used, that might be correct; cloud computing frequently includes providing computing resources (processors, memory, storage) over the Internet. The key to remember, however, is that cloud computing doesn’t technically require the Internet: you can utilize private communications and network circuits between facilities and essentially form your own private cloud. In many situations, a combination of private WANs, communications circuits, and the Internet is what is actually used for cloud computing services. I fully define and differentiate between public, private, hybrid, and all the cloud computing models later in this lesson.
Although they began as a pure on-demand compute and storage environment serviced through the Internet, cloud services quickly expanded to include various networking, backup and recovery, platform, application, and hosted data services. There are five key characteristics of cloud services as defined by NIST (note that I have updated the descriptions of each characteristic from NIST’s original publication):
Even if an organization does not migrate or adopt a true cloud service, the benefits of the cloud are still desirable for on-premises, enterprise customer-owned datacenters and IT departments.
As organizations move critical compute, storage, and application systems to cloud providers, several additional attributes or characteristics have become more of an emphasis based on recent lessons learned. Many of the characteristics in the following list apply to IT modernization trends in general even if an organization isn’t yet shifting to a cloud environment:
Figure 1-2 shows the NIST visual model of cloud computing. Notice how a shared pool of resources is included among the essential characteristics at the top of the diagram. The middle layer represents the cloud service models followed by the cloud deployment types at the bottom.
Cloud deployment models proved to be an area of confusion during the initial years of the cloud-computing industry. Table 1-1 provides a summary definition of each cloud deployment model. Although NIST is often referred to for definitions of cloud and cloud models, Table 1-1 represents a more modern breakdown of cloud deployment models.
Figure 1-3 depicts the relationship of the enterprise (customer) network infrastructure and private (on-premises or off-premises) cloud options. When connected to one or more types of cloud providers, a hybrid cloud is formed. There can be multiple private or public cloud providers interconnected. Many public cloud providers offer VPC and various other as-a-service offerings (e.g., IaaS, PaaS, and SaaS) from their public cloud infrastructure.
Each cloud deployment model—public, private, VPC, community, and hybrid—offers distinct advantages and disadvantages. Customer requirements determine which model, or combination of models, is truly the best fit for a given customer. Understanding these cloud deployment models is essential to begin planning your cloud transition.
Table 1-1 provided a brief definition of each cloud deployment model. Now, I will focus on the unique characteristics of each one as compared to the others. Ultimately, a full assessment of an organization’s requirements is needed in order to pick the best solution.
Although experience and industry trends show that customers have a preference for the economics provided by public clouds, it is private clouds that offer more flexibility with customized features and security. The larger the organization, particularly government entities, the more likely a private cloud will be deployed—conversely, small and medium-sized businesses often cannot afford to purchase or build their own private clouds. Many small businesses also have the advantage of little or no existing investment in infrastructure, so they can more quickly adopt cloud-hosted applications when first forming the organization.
A public cloud service is based on a cloud provider typically offering preconfigured and published offerings. They normally have an online storefront that lists all available products, configurations, options, and pricing. Because the public cloud providers are offering services to the general public and a wide variety of customers, they have implemented their own cloud management platform. The cloud platform and services offered are targeted at the widest group of potential consumers; therefore, customization of the service is normally limited.
The public cloud provider owns, manages, and operates all computing resources located within the provider facilities, and resources available to users are shared across all customers. Customization of a public IaaS application is usually limited to selecting options from a service catalog. Common options include choice of the operating system (OS), the OS version, and the sizing of the VM (processors and storage). Cloud providers often prebundle IaaS VM services into small, medium, large, and extra-large configurations, each with predefined processor, memory, and storage sizes. Customizations to ordering, billing, reporting, or networking might not be accommodated; this is a situation for which a private cloud deployment is more suitable.
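The prebundled small/medium/large catalog described above can be pictured as a fixed menu the consumer orders from—anything outside the published bundles is simply unavailable. The following sketch is hypothetical (the bundle sizes and OS image names are invented for illustration):

```python
# Hypothetical public-cloud IaaS catalog: the consumer selects from
# preconfigured bundles rather than specifying arbitrary hardware.
# All bundle names, sizes, and OS images below are invented.

CATALOG = {
    #  name      (vCPUs, RAM GB, storage GB)
    "small":     (1,     2,      50),
    "medium":    (2,     4,      100),
    "large":     (4,     8,      200),
    "xlarge":    (8,     16,     400),
}

SUPPORTED_OS = {"ubuntu-22.04", "windows-2022", "rhel-9"}  # hypothetical list

def order_vm(size: str, os_image: str) -> dict:
    """Validate an order against the published catalog; anything outside
    the catalog (custom sizing, unsupported OS) is simply rejected."""
    if size not in CATALOG:
        raise ValueError(f"no such bundle: {size}")
    if os_image not in SUPPORTED_OS:
        raise ValueError(f"unsupported OS image: {os_image}")
    vcpus, ram_gb, disk_gb = CATALOG[size]
    return {"size": size, "vcpus": vcpus, "ram_gb": ram_gb,
            "disk_gb": disk_gb, "os": os_image}
```

The hard rejection of anything off-catalog is the point: in a public cloud, standardization is what enables scale, which is why customization requirements push organizations toward private deployments.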
Public cloud providers have also entered the virtual private, community, and even private cloud service market—providing more data segregation and customization for each customer rather than the legacy pure public cloud models. Some public cloud service providers are beginning to blur the lines between public, private, and hybrid cloud through integration back to on-premises enterprise datacenter-based identity, authentication, application publishing, and other services.
A very recent industry trend is public cloud providers launching new hybrid services. These hybrid services focus on integrating traditional enterprise datacenters, typically on a customer’s premises, with public cloud services. This hybrid capability makes it possible for customers to federate authentication systems, synchronize data, support both enterprise and cloud applications, and fail over enterprise servers to public cloud VMs.
Some cloud providers offer higher government-level security upgrades, which might use physically separated resources deployed in a segmented compartment within the provider’s datacenters. A public cloud provider that dedicates infrastructure and services to one customer is essentially offering a VPC, but it might market this under the term “community cloud” or a brand name such as “Federal” or “Gov” cloud. In some cases, a cloud provider might offer completely isolated and dedicated network infrastructures for each of its customers purchasing the government-compliant high-security cloud option; however, technically these dedicated cloud infrastructures would be more accurately defined as private or managed private clouds hosted within the provider’s facility.
Private cloud services might begin with the same basic cloud computing services as offered by a public cloud provider, but the service can be hosted at a customer-owned or contracted datacenter. Private clouds offer choices of the cloud services to be deployed, how much integration there is between services, how the management and operations are handled, and the level of security controls and accreditation.
Private cloud is an excellent model for large organizations that have significant existing datacenter and server farm assets, and want to slowly modernize to cloud technologies and processes. The organization can deploy a private cloud within the same datacenter(s) with a longer-term plan to migrate legacy IT systems to the cloud model over time. The customer can then transition applications and data at the discretion of its staff, augmented by an IT cloud service integrator or other expertise, as needed.
As I state throughout this lesson, as soon as you connect a private cloud to another type of cloud (e.g., public), by definition, you now have a hybrid cloud. In addition, if we are going by strict definitions, if you connect existing traditional datacenters, server farms, or applications to the private cloud, you also have formed a hybrid cloud. For this reason, I believe almost all clouds are or will become hybrids and the terms “hybrid,” “private,” and “public” will disappear over time.
Almost all enterprise clouds will become hybrids—using a combination of on-premises IT, private, and public compute and application resources.
Arguably, the first public cloud service provider to achieve wide acceptance and scale was Amazon Web Services (AWS). This is an example of a true public cloud compute offering, with all the key characteristics and benefits of cloud services. Many other providers have built or are building their own public cloud offerings to provide similar capabilities. The key benefits that organizations achieve from using a public cloud are not being questioned here, but there seems to be a misconception about private cloud computing when organizations evaluate and select a provider or deployment model.
Most small and medium-sized businesses (also referred to as SMBs) do not have many choices in selecting their cloud deployment model due to their size, limited budget, internal technical expertise, and needs. Often a public cloud service offering is adequate and cost effective compared to purchasing or deploying a private cloud. For larger organizations that have size, complexity, and unique requirements, a private cloud service is often more suitable. Of course, a private cloud involves deploying the cloud services either within an on-premises datacenter, or hiring a vendor to configure a dedicated private cloud for the organization. This usually costs more money to deploy, but has significant advantages, the most important of which is the ability to customize the private cloud service to meet the organization’s security, operational, financial, and other unique requirements—something a public cloud service cannot offer and SMBs often cannot afford.
In my experience, most customers—larger organizations and government entities, in particular—desire the flexibility and scalability of public cloud offerings. Unfortunately, their unique requirements almost always force a private cloud to be considered in the end. These unique requirements, difficult to accomplish using public cloud, include customizations in the procurement, security, operational, reporting, and governance processes. Only private cloud deployments have the ability to highly customize the cloud service to meet customer requirements. Typically, the larger and more complex the customer, the larger and more complex its list of unique requirements will be. For this reason, it is important to discuss early in the planning process all requirements and their level of priority.
As new cloud customers see the potential uses and features, there is a tendency to ask for extensive customizations. Private clouds allow for more customization, but is the cost really worth it to manage one-off unique cloud platform configurations in the long run?
Table 1-2 compares private and public cloud capabilities. It does not include community, virtual private, or hybrid, because these are really just variations of private and public.
| Capability | Private cloud | Public cloud |
|---|---|---|
| Service catalog | Customized to customer needs | Established by provider |
| Billing and reporting | Ability to integrate with corporate billing systems | Preestablished billing and reporting; no integration with corporate billing systems |
| Service-level agreements (SLAs) | Often customized per customer requirement | Established by provider |
| Granular resource metering | Granular metering of resources | Established by provider |
| Infrastructure servers | VMs and physical servers | Normally only VMs offered |
| Security | Customized enterprise-class security | High but standardized security; rarely customizable per customer |
| Service offerings | Customized to customer needs | Established by provider |
| Self-service control panel | Customized to customer needs | Limited |
| Operations and management | Performed by provider, customer, third party, or a combination | Performed by provider |
| Security management, monitoring, and accreditation | Performed by provider, customer, third party, or a combination | Performed by provider |
| Elasticity and scalability | Limited by the size of the deployed compute resources | Effectively unlimited; service levels guaranteed |
| Time to provision | After initial setup, minutes to hours | Minutes |
| Support | Dedicated account support | Optional support from provider or reseller/channel |
| Professional services | Transition, migration, support, and implementation services | Limited customization services; migration and other services available from provider or reseller/channel |
| Management services | Full application, database, and platform management services | Limited application and database management services |
A hybrid cloud uses multiple cloud services—any combination of public, private, community, and traditional IT (enterprise) datacenters. A theme throughout this lesson is the trend for private clouds to serve as a baseline for many organizations, which then extend services to one or more public cloud XaaS offerings to form a hybrid cloud. Technically, when you connect one cloud to another cloud, or you connect to legacy datacenters and applications, you then have a hybrid cloud. Industry and early cloud adopters have learned that it is wise to implement a cloud management system with embedded hybrid capabilities to integrate multiple cloud providers and legacy customer IT assets. The cloud management system is the centralized ordering, automation, and reporting engine that integrates each cloud service, integrated module, or application.
As customers push the limits of what a public cloud is able to offer, or implement a private cloud, the immediate needs often fit within the combined features of both. In the real world, even the newest private cloud customers just starting out can already see potential uses for a hybrid cloud; they just aren’t ready for it yet. Although public and private clouds are the dominant models deployed today, expect to see hybrid clouds become the norm. Hybrid clouds will become so commonplace across most organizations and datacenters that the terms private and hybrid cloud might disappear in the future.
Many hybrid clouds begin as a private cloud that later extends integration to use one or more public cloud XaaS offerings. There is also a new emerging trend for public cloud providers to do the reverse—using the public cloud platform to integrate back into legacy enterprise datacenters and private clouds. The concepts are the same but the lines between private, public, and legacy datacenters continue to blur as hybrid clouds evolve.
Motivations to implement a hybrid cloud are numerous; primarily, customer organizations might fit within one cloud model (public, private, or community) initially, but future needs to extend their cloud, service, integration, or data sharing with third parties force expansion into a hybrid cloud deployment. Rather than individual management and operations of multiple cloud providers, it is preferable to use a single cloud management system to manage or broker between cloud providers, retaining only one platform to manage all financial, ordering, procurement, automation, workflow, security, governance, and operations in your organization.
After a hybrid cloud service is deployed, the ability to take advantage of best-of-breed software applications and XaaS cloud providers increases, but management of the overall cloud solution is still crucial. Although a customer can purchase cloud services from multiple cloud providers—one hosting a public cloud service, another a private one—purchasing multiple services from different cloud providers requires managing each cloud provider separately. You would use each cloud provider’s management portal for ordering, billing, reporting, and so on—multiplied by the total number of cloud providers to which you have subscribed. A hybrid cloud management solution is unique in that all cloud services across any number of cloud providers are managed through a single management portal. All ordering, billing, reporting, and cloud operations are managed through the centralized hybrid cloud management platform. The level of development and multiprovider integration required to create a unified hybrid or cloud broker platform is significant, and individual customers are strongly discouraged from attempting to develop such a system internally.
A community cloud service provides for a community of users or organizations with shared interests and concerns. Each member organization in a community cloud can host some portion, or application(s), that all departments of the organization can use. Some departments might have the same offering, which can be pooled together for capacity, load balancing, or redundancy reasons. A community cloud can create cooperation between organizations while reducing costs by sharing the infrastructure, operations, and governance.
Organizations utilizing this cloud service would ideally have similar missions, governance, security requirements, and policies. Cloud services can be hosted on premises at the consumer organization, at peer organization facilities, at a provider, or any combination of these, which allows sharing of the costs and ongoing management. Trends over the past few years indicate limited adoption of this community model—largely because these deployments require an extensive and deep long-term relationship between multiple organizations in order to build, govern, and operate them. This has driven some organizations to consider a virtual private cloud (VPC) as an alternative model.
Some cloud providers offer a specialized, community cloud offering. Community cloud is often used as a marketing term to explain a targeted group of customers, such as government public sector organizations, although the actual cloud is technically a VPC, private, or hybrid cloud model.
The primary concern with a community cloud is how the cloud is managed. Standards of communications, cloud management systems, and the services offered need to be agreed upon and upheld across multiple departments or organizations—not just initially, but for many years. This is where business challenges begin to reveal themselves. What happens if, in the future, one of the community cloud departments or organizations changes their business, budget, security standards or other priorities? What if that department was offering critical resources to the community cloud that will no longer be available at the same level as originally agreed upon?
The critical factors to consider in a community cloud are less technical; instead, the focus is on business process, stability, and cooperation-based considerations. Let’s take a look at some of these operational and business challenges:
There are several widely accepted “as a service” models in the industry (and as defined by NIST). Each service model is briefly defined in Table 1-3, and then I detail each service model with real-world examples, architectures, trends, and lessons learned.
As the cloud computing industry and customer adoption have progressed, additional “as a service” (what I call XaaS) models have been coined. Most of these new XaaS models actually fit within one of the three aforementioned core definitions. Many cloud service providers started out with clearly defined cloud services such as IaaS. Today, the service lines have blurred, with public cloud providers and enterprise private cloud owners now deploying numerous platform and software services.
Although I would like to provide you with example cloud service providers for each of the services listed in Table 1-4, there are just too many providers, with a good number of them regularly adding more of these services to their portfolio. This has and will continue to blur the line between public cloud providers because most will claim to provide many of these services; however, the quality and completeness of the services will vary widely. Finally, there are many cloud providers that have decided to offer and focus on only one cloud service, such as Salesforce.com, which offers CRM. Compare this to public cloud providers Microsoft and Google, which initially launched email-centric cloud services (Hotmail and Gmail, respectively) but have significantly expanded to numerous IaaS, PaaS, and SaaS offerings.
Infrastructure services are the most common offering for public cloud providers. Staging IaaS in a private cloud requires a certain amount of initial investment but is often the starting point of the private cloud—adding SaaS and PaaS applications after the basic IaaS compute and storage services are in place. A basic IaaS offering provides VMs with either fixed or dynamic VM sizing options. Cloud providers might offer multiple VM sizes and OSs at fixed prices per hour, day, week, or month. The cost per VM rises as the amount of processor, memory, and storage increases. Note that one provider’s definition of a processor unit might not be the same as another provider’s in terms of performance or speed. In a dynamic resource pricing model, the customer is charged a fee per unit of processor, memory, and storage; this is more configurable and affords greater scalability than fixed-price IaaS offerings.
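To make the dynamic pricing model concrete, here is a minimal sketch of a per-unit cost calculation. The unit rates, function name, and VM dimensions are illustrative assumptions, not any real provider’s prices:

```python
# Hypothetical dynamic-pricing calculator for an IaaS VM.
# All rates below are assumed for illustration only.
UNIT_RATES = {
    "vcpu_hour": 0.020,      # $ per vCPU per hour (assumed)
    "gb_ram_hour": 0.005,    # $ per GB RAM per hour (assumed)
    "gb_disk_hour": 0.0001,  # $ per GB disk per hour (assumed)
}

def vm_cost(vcpus, ram_gb, disk_gb, hours):
    """Return the metered cost of one VM under a pay-per-unit model."""
    return hours * (vcpus * UNIT_RATES["vcpu_hour"]
                    + ram_gb * UNIT_RATES["gb_ram_hour"]
                    + disk_gb * UNIT_RATES["gb_disk_hour"])

# A 4-vCPU, 16 GB RAM, 100 GB disk VM running a 730-hour month:
monthly = vm_cost(4, 16, 100, 730)
print(f"${monthly:.2f}")  # prints $124.10
```

Contrast this with a fixed-price offering, where the same VM would bill at a flat monthly rate regardless of how its resources were sized or used.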
Public cloud providers often preconfigure specific offerings, such as VMs; however, the exact processor, memory, and disk space allocated to each VM might not be the same as other providers, so price comparison is not always easy. In fact, some public cloud providers have seemingly intentionally confused their VM configurations, VM size pricing, discount levels, transaction fees, and other metrics to obfuscate their true real-world costs to consumers. The key features you should be looking for are fixed or dynamic sizing, costs for expanding or increasing resources, and the ability to control your VMs through a web-based control panel. High-quality service providers will have an extensive self-service control panel that puts the consumer in control of the VMs, with the ability to reboot, resize, and potentially take a snapshot and restore them. Also, be sure to examine the SLA and any guarantees of system availability.
When it comes to flexibility of options, your public IaaS provider should give you the ability to select your preferred OS, and possibly several versions of each OS to suit your needs. The agreement should clearly specify if backup and restore services are included, or if there are additional charges for those. An advanced feature that might be available is the ability to define your own subnetworks, load balancers, and firewall services.
Although public cloud providers offer a menu of fixed and variable-priced IaaS options, deploying your own private cloud will provide more customization, procurement, and security features unique to your organization. Table 1-5 presents a comparison of features for a typical public cloud service versus what is typical of a private cloud. It is important to note that public cloud providers are constantly enhancing their offerings and self-service capabilities, so the differentiation between public and private clouds pointed out in Table 1-5 does not represent every situation and cloud provider.
Feature | Public | Private |
---|---|---|
VM-based server | ||
Selection of OS | ||
Choice of VM size (CPU, RAM), storage/disk | ||
Ability to dynamically expand resources (CPU, RAM, disk) as needed | ||
Ability to configure load balancing, firewalls, and subnetworks | ||
Ability to define backup schedule and perform self-restores | ||
Self-service control panel to manage VMs | ||
Provide OS patches and version upgrades | ||
Ability to select from multiple backup and restore schedules and retention times | ||
Ability to select from multiple tiers of storage performance (e.g., high-transaction solid-state disks, slower file-based storage) | ||
Ability to manage multiple groups of VMs with separate administrators, operators | ||
Ability to customize OS templates | ||
Ability to customize metering, billing process, ordering/approval process | ||
Ability to install custom OS versions or customer-defined custom OS | ||
Meet consumer-specified security controls | ||
Consumer has visibility into security logs, real-time security threats, and activities | ||
Consumer has detailed real-time view into cloud operating, statistics, metering, and performance | ||
Ability to specify where data is stored (by country or datacenter facility) | ||
✓ = Typically available | ✗ = Not typically available |
IaaS applications are defined by the provider in public clouds; the consumer is essentially limited to the OS templates, versions, and standard configuration options that the public provider allows. As part of a larger group of public customers, your ability to customize the offering is limited—the applications and settings within the OS installed on the VM itself are the only aspects over which you have complete control. A public cloud provider might allow you to create or import your own VM templates. Private cloud services are essentially a unique instance of the cloud service; you can customize them to a much greater degree.
The typical architecture of an IaaS application involves the creation of one or more server farms within multiple datacenters. The server farms each contain high-density blade servers in order to fit as many physical servers in a single rack as possible. Racks are installed in numerous rows, each one having at least two redundant power distribution units and cables into the datacenter power plant. The power plant also has various power backup resources, such as uninterruptible power supplies, batteries, and backup generators.
Multiple pools of servers or server farms are often located in the same datacenter both for expansion and local failover due to maintenance or continued operations during a hardware failure. For most large cloud providers, secondary datacenters are also deployed, as a geo-redundant system for both maintenance and facility-level failure protection. The cost of the datacenter facility, heating and cooling, power, and operations personnel are significant enough by themselves that most organizations heavily consolidate or avoid building them entirely—yet one more reason to use a cloud provider, instead.
Shared within the racks, or nearby, are the disk and storage systems—often in the form of a SAN or equivalent storage system. The storage systems are normally scalable independent of the server racks and utilize their own technologies to handle data de-duplication, thin provisioning, backup and recovery, snapshots, and replication to secondary datacenters. I will not spend a lot of time on SAN technologies here, but these modern storage features afford the cloud provider significant cost savings through technology innovations and the sheer volume and quantity of storage. These savings are passed on to consumers of cloud services, and the resulting costs are often lower than anything an individual consumer could negotiate and deploy on premises.
Within each physical or blade server in a rack, the cloud provider will have a virtualization hypervisor such as VMware, Microsoft Hyper-V, Citrix, or KVM. The configuration of these hypervisors is normally hidden from consumer visibility by the cloud provider. A cloud provider has significant ways to share each physical server across multiple customers; one physical or blade server can host as many as 20 to 50 customer VMs, each one having its own OS, applications, and disk storage allocation. The cloud provider can use advanced hypervisor configurations to automatically scale up processors and memory as needed to the VMs based on workload and usage. Additional tools give the cloud provider the ability to failover one VM to another physical server within the same rack, a separate rack, or even across datacenters, all without the customer even knowing the shift occurred—this is called high availability. This is a perfect example of the technologies within the cloud architecture that benefit both the cloud provider and ultimately, through cost savings and reliability, the consuming organization.
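The consolidation economics described above can be sketched with a rough capacity calculation. The fleet size, consolidation ratio, and spare-host count below are assumptions for illustration, not figures from any particular provider:

```python
import math

def hosts_needed(total_vms, vms_per_host=40, spares=1):
    """Physical servers required for a VM fleet at an assumed
    consolidation ratio, plus spare capacity for HA failover."""
    return math.ceil(total_vms / vms_per_host) + spares

# Hosting 1,000 customer VMs at ~40 VMs per physical server
# (within the 20-50 range mentioned above), with one spare host:
print(hosts_needed(1000))  # prints 26
```

The same arithmetic done with one VM per physical server would require over a thousand machines, which is the core of the cost savings that virtualization delivers to both the provider and the consumer.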
PaaS is often confused with IaaS. PaaS combines the basic VMs and infrastructure from the IaaS model and adds software preconfigured in the VM to create a platform.
One example of a PaaS offering is a VM preconfigured with a database management system, all ordered via a single service catalog item. Platforms often consist of multiple VMs that form a multitiered application stack. Using the database example again, a multitiered application might consist of two frontend web servers, two application servers, and a clustered database server—six VMs in total, configured as one application platform or PaaS offering. Note that you can also classify the hosting of web pages—sometimes across numerous datacenters and providers called content delivery networks (CDNs)—as a PaaS offering.
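The six-VM database platform described above could be captured as a single service-catalog entry, as in this sketch. The catalog structure, tier names, and image names are hypothetical:

```python
# Hypothetical service-catalog entry for the multitiered platform
# described above: six VMs provisioned together as one PaaS item.
CATALOG_ITEM = {
    "name": "three-tier-db-platform",
    "tiers": {
        "web": {"count": 2, "image": "web-server"},
        "app": {"count": 2, "image": "app-server"},
        "db":  {"count": 2, "image": "db-cluster-node"},
    },
}

def provision(item):
    """Expand a catalog item into the list of VM names to create."""
    vms = []
    for tier, spec in item["tiers"].items():
        for i in range(spec["count"]):
            vms.append(f"{item['name']}-{tier}-{i + 1}")
    return vms

vms = provision(CATALOG_ITEM)
print(len(vms))  # prints 6
```

The key point is that the consumer orders one catalog item, and the provider’s automation expands it into the full application stack.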
The PaaS cloud service provider has already done the work of properly sizing the VMs and installing the OS, application software, and tools necessary for the customer to begin using the system immediately after provisioning. Technically, a customer could have ordered one or more VMs from the list of IaaS offerings and then installed its own database software, applications, and other tools; however, this requires technical expertise and time on the customer’s part. Even more important is that in a PaaS offering, the cloud service provider now manages the entire platform, not just the OS, so all upgrades, patches, and support are handled by the cloud service provider. This is what makes PaaS unique compared to an IaaS offering.
Figure 1-5 demonstrates how the cloud provider has more operational responsibilities for PaaS and SaaS applications compared to IaaS.
SaaS includes many types of applications such as commercial off-the-shelf (COTS), open source, cloud provider–proprietary, and customer owned or developed. The application along with its required server, storage, and network infrastructure are hosted by the public cloud provider or optionally on a customer or third-party premises. Typical examples of SaaS include email services, collaboration, instant messaging, document libraries, and CRM.
Organizations often have too many applications to list, but it is important to remember that, without significant recoding, many legacy applications are not suitable for deployment in a cloud service. Many months or years of application transformation (a topic worthy of its own book) are often necessary. In the meantime, there are techniques, used mostly in private, community, and hybrid clouds, that make porting of simple legacy applications possible while a full recode of the more complex legacy applications is performed.
Because each application in a SaaS offering is unique in its infrastructure requirements, licensing, cost, and deployment models, there is no single solution that cloud providers use. A smart cloud provider takes advantage of as much of the IaaS architecture described earlier as possible. This means using a shared-storage system or SAN, virtualizing server hardware when possible, and providing redundancy and load balancing across multiple server farms and datacenters. SaaS cloud providers can implement dedicated server farms and applications for each consumer organization, but this is not nearly as cost effective as sharing a single instance of each application across a multitenant configuration. Additional benefits include the ability to deploy bug fixes quickly and upgrade software to the latest version, precluding the need to support numerous older software revisions.
Depending on the type of software, the manufacturer, the built-in security controls, and login and authentication systems, the cloud provider uses a combination of native software tools and custom-developed programs to maintain separation (or multitenancy) between consumer organizations. This means one consumer cannot see data, user accounts, or even the existence of any other consumer.
Public cloud providers include very economical licenses (after a significant amount of renegotiating with the software vendors) for software used in their SaaS offerings. In some cases, this is no mean feat, because the licensing models of software vendors do not often allow dynamic license expansion and contraction. In an average cloud system, the cloud provider takes on the responsibility of purchasing and maintaining a pool of licenses for all software products, and often across dozens of software manufacturers. This means that consumers do not need to bring their own licenses or purchase traditional software licenses of their own; they simply “rent” a license from the cloud provider. In a private cloud deployment model, you might not have as much leverage with software vendors to negotiate pay-per-user elastic licensing; however, it’s possible that you will be able to use existing Enterprise License Agreements (ELAs) that your organization might already own and prefer to maintain.
Table 1-6 shows a comparison of the common capabilities and limitations of public cloud SaaS applications compared to traditional IT or private cloud application hosting. As you can see, a public cloud SaaS offering might not provide the same level of customization or features as a traditional IT or private cloud-based application—this is mostly because the public cloud SaaS application is a shared system, whereas private and traditional systems are dedicated to one customer.
Feature | Public | Private |
---|---|---|
Backend infrastructure (server, compute, disk) provided and managed by cloud provider | ||
Licenses furnished by provider, included as part of the per-user fee to consumer | ||
Application updates and patches by provider | ||
Define backup schedule, perform restores | ||
Self-service control panel to manage VMs | ||
Provide OS patches and version upgrades | ||
Selection of additional storage or application options | Limited | |
Ability to customize application features | Limited | |
Host legacy customer applications and maintain app operations | ||
Ability to select from multiple tiers of storage performance | ||
Ability to customize metering as well as billing, ordering, and approval processes | ||
Meet consumer-specified security controls | ||
Consumer has visibility into security logs, real-time security threats, and activities | ||
Consumer has detailed real-time view into operating statistics, metering, and performance | ||
Ability to specify where data is stored (by country or datacenter facility) | ||
✓ = Typically available | ✗ = Not typically available |
One key aspect of SaaS offerings is that the cloud service provider manages the entire system, including all servers or VMs, the OSs, and all the applications. Technically, a customer could order an IaaS offering (plain VM with an OS loaded) and install its own software applications, but then the customer is responsible for all upgrades, patching, and support. With a true SaaS offering, the cloud service provider handles all management of the system, including all future versions/upgrades. For a diagram of provider responsibilities across IaaS, PaaS, and SaaS offerings, refer back to Figure 1-5, earlier in this lesson. Across all cloud services, the consumer still has the ability to perform some configuration within certain limitations that the provider and application allow.
So far, IaaS, PaaS, and SaaS are the primary categories of cloud services that we’ve explored. Within these primary categories, there are numerous cloud services that have their own “as a service” names; technically, however, they are individual use cases or applications that fit within the definition of IaaS, PaaS, or SaaS.
To be accurate, Workplace as a Service (WPaaS) fits within the definition of an IaaS or even a PaaS. Similar to IaaS and PaaS, numerous physical servers, each with a hypervisor system, are pooled to offer a multitude of VMs to the consumers. These VMs are very similar to the VMs in an IaaS offering, except that they are normally installed with a desktop OS rather than a server-based OS. Microsoft Windows or Linux are common desktop OSs. Citrix is one of the most popular hypervisor technologies for WPaaS. Desktop as a Service or virtual desktop interface (VDI) are other names that cloud providers use for these hosted virtual-desktop platforms.
The VMs in a WPaaS solution also include application software for users such as Microsoft Office. Users log on to the VMs through the Internet or other WAN communications circuit and essentially “take control” of the virtual desktop. All processing (compute, memory, and storage) is actually running within the cloud service provider’s datacenter with only display, keyboard, and mouse activity transmitted over the network. The end user functions as a thin client using a desktop, notebook, tablet, or other thin-client terminal.
The applications that are shown for one consumer organization, or subset of users within an organization, might be different from other users. This is done through the configuration of roles and profiles in the OS and application software. Based on the user’s logon credentials, certain applications are available and preinstalled on the virtual desktop. There might be several levels or types of users that each consumer organization defines, such as Executive User, Knowledge User, or Task User. In this example, an Executive virtual desktop would have all available software installed, and maybe a higher level of storage and compute or memory, compared to that of a mid-level Knowledge user. A Task user might be a specific role for end users who only access a single program rather than an entire virtual desktop with a suite of applications. Of course, the cost that the cloud provider charges will depend on the definition of the users, the size of the VMs, and the cost of the software licenses for the apps.
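The role-based profiles described above can be sketched as a simple mapping from role to desktop specification. The role names come from the text; the application lists and VM sizes are assumed for the example:

```python
# Illustrative WPaaS role profiles; apps and sizes are assumptions.
PROFILES = {
    "executive": {"vcpus": 4, "ram_gb": 16,
                  "apps": ["office", "email", "crm", "analytics"]},
    "knowledge": {"vcpus": 2, "ram_gb": 8,
                  "apps": ["office", "email"]},
    "task":      {"vcpus": 1, "ram_gb": 2,
                  "apps": ["call-center-app"]},  # single published app
}

def provision_desktop(username, role):
    """Build a virtual-desktop spec from the user's role profile."""
    profile = PROFILES[role]
    return {"user": username, **profile}

desk = provision_desktop("jsmith", "task")
print(desk["apps"])  # prints ['call-center-app']
```

Because the provider charges per user by profile, defining roles tightly (rather than giving everyone an Executive-sized desktop) directly controls the consuming organization’s monthly bill.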
In a public cloud service model, the cloud provider is responsible for all management and upgrades to the OS and all applications. In a private cloud model, the organization that owns/operates the cloud could take on full OS and application upgrade responsibility or allow participation from end-user departments to manage portions of the application stack or user profiles. The consuming users need only load the thin-client or remote desktop software tool that enables the connection through the cloud to the virtual desktop server farm.
You can also integrate custom applications into the virtual desktop solution, but this usually requires the cloud provider to host the application itself and assume full responsibility for the application lifecycle, including all management and future upgrades. Because the customer already owns the application, cloud service providers might not charge a license fee for these homegrown applications, but they will normally charge fees to manage them (e.g., backup and restore, upgrades, and patches).
Microsoft Virtual Desktop Infrastructure, Citrix XenDesktop, and Vmware Horizon View are some of the top desktop virtualization software platforms used by cloud providers or organizations with private clouds. Each of these software vendors has multiple product differentiators both at the server and end-user level. There are many other desktop virtualization systems that public cloud providers utilize behind the scenes, and they do not always publish which software platform they use.
Application publishing uses similar technology as the aforementioned WPaaS. This service has one or more individual applications available to end users instead of the entire OS desktop interface. This is ideal for several types of situations, including the following:
Task workers who only need to run one application that they use all day long (there’s no point in presenting a full virtual desktop OS for these users).
Application publishing for very large or legacy applications that need to be available to a large number of end users but are easier to manage when in the cloud rather than installing them on every end user’s computer. By publishing an application, end users can click an icon on their full desktop OS and run the application from the cloud. The application technically runs in the cloud with display, keyboard, and mouse activity transferring over the Internet. The application is never installed on the end user’s desktop computer, so whenever the application is updated, the user gets the latest version immediately; there’s no need to upgrade it on every end-user desktop.
Mobile devices (e.g., tablets and smartphones) are now very common and also greatly benefit from application publishing. This app publishing is particularly important for mobile computing because end users rarely turn their devices over to an IT department to install or routinely update software, especially when the mobile devices might be owned by the end user, not the company.
Application publishing is quickly gaining popularity with some public cloud providers such as Microsoft Azure. Azure has recently added a new service called RemoteApp. With RemoteApp, administrators can select any cloud-hosted applications that users are allowed to use. End users can run the RemoteApp application on a Windows, Macintosh, Android, iOS, or Windows Mobile device—running full Microsoft Office applications, for example, on their desktop or mobile devices. Behind the scenes, these applications run fully on the cloud servers within Azure, with only the user interface transmitted to the end user’s desktop or mobile device. This RemoteApp service is a good example of a cost-effective way to manage a wide variety of end-user devices and enterprise applications. Because RemoteApp is a cloud-based service, organizations pay only for the actual services used, and Azure automatically scales servers up and down to keep up with utilization.
The future of application publishing holds huge promise. There is only one major downside to using it: users must be connected to the Internet or to the cloud in some way to use their applications; there is no offline use. Some newer application publishing software incorporates an offline capability. Such an application still runs over the cloud and requires access via the Internet, but it saves a copy of itself on the local desktop or mobile device the first time it is run. After this background copy is finished, the next time the end user runs the application, it will still try to run online, but if there is no connection to the Internet, the application runs locally. When the user next runs the application while connected to the cloud, if the application has been updated to a newer version, it will still run remotely while quietly updating the stored version for offline use.
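The offline-fallback behavior just described can be sketched in a few lines. The function names, cache model, and version numbers here are hypothetical:

```python
# Sketch of offline-capable application publishing: prefer the cloud,
# cache a local copy on first run, fall back to the cache when offline.
local_cache = {}  # app name -> cached version

def run_app(name, cloud_version, online):
    """Run an app, keeping the local offline cache up to date."""
    if online:
        # Running remotely; quietly (re)cache the latest version locally.
        local_cache[name] = cloud_version
        return f"ran {name} v{cloud_version} in cloud"
    if name in local_cache:
        return f"ran {name} v{local_cache[name]} locally (offline)"
    return f"{name} unavailable offline (never cached)"

print(run_app("editor", 2, online=True))   # runs in cloud, caches v2
print(run_app("editor", 3, online=False))  # falls back to cached v2
```

Note the one limitation the sketch makes visible: an application that has never been run while online has no cached copy, so it cannot run offline at all.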
A Development and Test as a Service (also called Dev/Test) offering technically fits within the definition of PaaS with several unique features that facilitate application development, testing, and automated release management efforts.
Customers benefit from a Dev/Test service by being able to quickly launch new VMs within minutes, perform testing of an application, and turn off the Dev/Test service when it is no longer needed. Because all the servers are hosted as VMs in the cloud, the consuming organization does not need to prepurchase and deploy a dedicated server farm that sits idle when an application development team has finished its work or between application releases. Dev/Test teams often utilize numerous VMs residing in on-premises private cloud facilities or in a public cloud. There are often multiple work streams, multiple development teams, testing and quality assurance teams, and multiple versions of each application being developed simultaneously—all of this benefiting from the elasticity and pay-as-you-go features of a private or public cloud.
Specific features of a Dev/Test environment are detailed in the list that follows (the basic offering and systems architecture is the same as IaaS, which means there is a pool of physical servers, each running a hypervisor system providing capacity for hundreds of thousands of VMs on-demand):
Some Dev/Test offerings provide the application development team with the ability to promote individual or multiple VMs to the next phase of the application lifecycle. For example, when all testing of an application release is completed in the Dev/Test environment, clicking a “promote” button would automatically copy or move the VMs to a staging or production network within the cloud provider’s datacenter, facilitating an Agile software delivery and continuous application delivery automation. An additional benefit to the customer is the ability to launch a new release of its application into production while maintaining the availability of its Dev/Test environment for the next release. If the Dev/Test VMs are no longer needed, they can be de-provisioned and the customer stops paying for them. Figure 1-6 shows a Dev/Test network with multiple isolated network segments for development, testing, and production.
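The “promote” action described above can be sketched as advancing a VM through ordered network segments. The segment names and the VM data model are assumptions for illustration:

```python
# Sketch of Dev/Test lifecycle promotion: a "promote" action moves a
# release's VM to the next isolated network segment, dev-test through
# production (segment names are assumed, mirroring Figure 1-6's layout).
STAGES = ["dev-test", "staging", "production"]

def promote(vm):
    """Advance a VM to the next lifecycle segment, if one exists."""
    idx = STAGES.index(vm["segment"])
    if idx + 1 < len(STAGES):
        vm["segment"] = STAGES[idx + 1]
    return vm

vm = {"name": "app01", "segment": "dev-test"}
promote(vm)
print(vm["segment"])  # prints staging
promote(vm)
print(vm["segment"])  # prints production
```

In a real offering, each promotion would copy or move the VM between provider-managed networks; once the Dev/Test copies are no longer needed, they are de-provisioned and billing for them stops.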
Storage as a Service is an essential part of the IaaS and PaaS offerings because VMs need storage in order to run. Storage as a Service provides various forms of data storage, as described in the list that follows, via the cloud, which makes it possible for end users to access their data from any location, personal computer, tablet, or other device connected to the cloud or Internet. Storage as a Service provides low-cost elastic storage that expands and shrinks based on utilization.
Storage as a Service is distinct from requesting additional storage as an option for an IaaS VM: Storage as a Service is storage sold as a standalone product. Because this storage is not connected to a specific server or VM, it can be sold and configured in several forms.
It is important to understand that cloud storage is often sold and described in terms of the type of storage being offered, such as object storage. The underlying storage method (e.g., storage area network [SAN], network-attached storage [NAS], and direct-attached storage [DAS]) is not disclosed to cloud consumers.
These are the forms of data storage provided by Storage as a Service via the cloud:
Pricing for storage is usually by the gigabyte (GB) or terabyte (TB) depending on the cloud service provider and the quantity of storage purchased. Because this is a cloud-based offering, providers normally charge only for the amount of data you have utilized rather than preallocated amounts. This pay-as-you-use storage model is one of the fundamental characteristics of cloud computing.
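A rough sketch of the pay-as-you-use pricing just described follows. The tiered per-GB rates are invented for illustration and are not any particular provider's prices; real providers publish their own tiers and may also bill for requests and data transfer.

```python
# Illustrative pay-as-you-use monthly storage bill with made-up tiered
# per-GB rates: you pay only for the amount of data actually stored.
def monthly_storage_cost(used_gb: float) -> float:
    tiers = [                    # (tier ceiling in GB, price per GB per month)
        (1024, 0.023),           # first 1 TB
        (50 * 1024, 0.022),      # next 49 TB
        (float("inf"), 0.021),   # everything beyond 50 TB
    ]
    cost, prev_ceiling = 0.0, 0
    for ceiling, price in tiers:
        if used_gb <= prev_ceiling:
            break
        billable = min(used_gb, ceiling) - prev_ceiling
        cost += billable * price
        prev_ceiling = ceiling
    return round(cost, 2)

print(monthly_storage_cost(500))   # 11.5  (entirely within the first tier)
print(monthly_storage_cost(2048))  # 46.08 (1 TB at each of the first two rates)
```

Because the meter tracks utilization rather than preallocated capacity, the bill shrinks automatically when data is deleted.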
Daily backups should be a standard feature of Storage as a Service pricing. The cloud service provider needs to maintain their SLAs as much as you need them to protect your data, so a base level of backup is normally standard and included in the price. The provider might also offer more frequent backups (including real-time data replication) and long-term data retention options.
Backup and Recovery as a Service (Backup as a Service) is a category of service that replicates data to multiple IT systems and datacenters with the purpose of recovery should the primary data be lost or become corrupt. Backup and recovery of servers, applications, and data is nothing new to a datacenter or IT department. In a cloud environment, however, changes in the backup/recovery hardware and processes are needed to back up and protect a cloud environment.
Backup as a Service is often sold and configured in two variations described here (in both variations, the underlying service is usually object-based cloud storage; however, public cloud providers charge less for these backup services because the data is normally written once, seldom accessed/read, and stored on relatively slow disk media):
Figure 1-7 shows an example of an online backup architecture in which data is backed up locally within the primary datacenter but also replicated, in near-real time or via snapshots, to a secondary datacenter or datacenters. In this example, the data is transferred via the Internet, but this could also be performed via high-speed private data circuits.
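The snapshot-style replication in this architecture can be sketched with ordinary files. In this minimal example, two local directories stand in for the primary and secondary datacenters, and a content hash decides which files have changed since the last replication pass; a real service would ship the changed data over the Internet or private circuits instead.

```python
# Sketch of incremental, snapshot-style replication: copy only files that are
# new or whose contents have changed since the previous pass. The directories
# are stand-ins for the primary and secondary datacenters.
import hashlib
import shutil
import tempfile
from pathlib import Path

def file_digest(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def replicate(primary: Path, secondary: Path) -> list:
    """Copy new or changed files; return the names replicated this pass."""
    secondary.mkdir(parents=True, exist_ok=True)
    copied = []
    for src in primary.iterdir():
        if not src.is_file():
            continue
        dst = secondary / src.name
        if not dst.exists() or file_digest(src) != file_digest(dst):
            shutil.copy2(src, dst)  # copy2 preserves timestamps
            copied.append(src.name)
    return copied

base = Path(tempfile.mkdtemp())
(base / "primary").mkdir()
(base / "primary" / "orders.db").write_text("rev1")
print(replicate(base / "primary", base / "secondary"))  # ['orders.db']
print(replicate(base / "primary", base / "secondary"))  # [] (nothing changed)
```

The second pass copies nothing, which is why providers can price backup storage cheaply: the data is written once and rarely read.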
Cloud providers often market additional “as a service” offerings to customers. Most cloud offerings technically fit within the definitions of the IaaS, PaaS, or SaaS models. There is a tendency for cloud providers to market and advertise their services by using newly invented terminology—something called Anything as a Service or X as a Service (XaaS).
The following list describes some of the unique XaaS offerings that have been coined (however, note that this is an ever-growing list, as cloud providers are constantly inventing new names for their services):
Based on lessons learned and experience from across the cloud industry, you should consider the following best practices for your organization’s planning.
Planning for the transition to the cloud is the most critical factor for success. Organizations often have a significant number of legacy computing resources such as servers and datacenters that will need to be transitioned or eliminated in order to achieve the true cost savings and flexibility provided by a cloud. Organizations can choose to modernize existing IT systems with cloud-like features such as virtualization and automation, or make a plan to completely transition some or all IT applications and infrastructure to a public, private, or hybrid cloud. Planning involves technical, financial, operational, and business process changes for your IT department and possibly your entire organization:
Consider what applications, server farms, storage, or entire datacenters might not be critical to the mission of your organization and your customers.
Identify and prioritize by application or business function—not necessarily by technology—IT services and applications that are commodity functions and not unique to your company (i.e., would shifting to a cloud provider or service relieve your IT operation from this burden with equal or better services and costs?).
What could your existing or restructured IT department better focus on if some or all commodity IT services were outsourced or migrated to the cloud? Focus on applications that are unique to your business and your customers that could benefit from increased support, enhancements, resources, operations, security, and so on.
Evaluate current contractors, temporary employees, and resource levels. Nobody is saying that a cloud service should automatically include staff reductions, but would realigning current staffing (permanent or contracted) benefit your organization as a whole?
Evaluate how the existing IT department interacts with and provides services to the overall organization. Is IT a trusted advisor and facilitator of services to the business and your customers, or is IT perceived as an obstacle (for whatever reasons) to the business needs? This is a good time to reconsider how IT provides services, and how IT might centralize (i.e., broker) services from multiple departments, possibly removing some legacy silos or unnecessary tiers or structures.
Analyze the cost of all current IT infrastructure systems, datacenters, applications licensing, data storage, and ongoing operational costs.
Analyze the cost of existing IT personnel, contractors, and any supporting vendors or service contracts.
Begin identifying any particular applications or legacy systems that you know or suspect are outdated, cost more to support than they are worth, and could be retired or replaced.
Include financial calculations for datacenter, network, server, storage, and application lifecycle replacements, depending on the useful service period (usually three to five years). Consider infrastructure costs for items for which warranties have expired, are about to expire, or are such a significant ongoing cost that an alternative might be considered.
Remember to include indirect costs such as power, cooling, building facilities, leased assets, or other costs that the business incurs but might not have been directly budgeted from an IT perspective (until now).
Attempt to break down all costs to a per-user and per-application basis so that you can truly understand the cost per user per month (or year) for your IT operations. This is often the most difficult and underestimated aspect of financial planning—most organizations underestimate or simply have never calculated all of the direct and indirect financial costs.
Calculate where your major IT assets are in terms of their depreciation schedule, because this will affect your financial plans, return on investment strategy, and possibly the entire idea of transitioning to the cloud.
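The financial homework above can be reduced to a back-of-the-envelope model. All figures below are illustrative assumptions (plug in your own direct costs, indirect costs, and asset numbers); straight-line depreciation over the useful service period of three to five years is assumed.

```python
# Back-of-the-envelope IT cost model: straight-line depreciation of major
# assets plus direct and indirect costs, broken down per user per month.
def annual_depreciation(purchase_cost: float, salvage_value: float,
                        useful_years: int) -> float:
    """Straight-line depreciation: spread net cost evenly over useful life."""
    return (purchase_cost - salvage_value) / useful_years

def cost_per_user_per_month(direct_annual: float, indirect_annual: float,
                            asset_depreciation_annual: float,
                            users: int) -> float:
    total_annual = direct_annual + indirect_annual + asset_depreciation_annual
    return round(total_annual / users / 12, 2)

# Illustrative numbers only: a $400k server farm on a four-year schedule.
servers = annual_depreciation(purchase_cost=400_000, salvage_value=40_000,
                              useful_years=4)           # 90,000 per year
print(cost_per_user_per_month(
    direct_annual=1_200_000,    # staff, contractors, licensing
    indirect_annual=300_000,    # power, cooling, facilities
    asset_depreciation_annual=servers,
    users=1_000))               # 132.5 per user per month
```

Even a crude model like this gives you a per-user baseline to compare honestly against a cloud provider's per-user or per-VM pricing.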
Assess your current technical infrastructure, operational processes, facilities, equipment and software lifecycles, current staff skillsets, and the effectiveness of your current IT department:
Evaluate which applications, data, or workloads are mission critical for performance, availability, and security and whether hosting on premises (enterprise IT or private cloud) is required.
Determine a draft list of candidate workloads that might be commodities and outsourced to a cloud provider (or not).
Evaluate current personnel skillsets and whether services hosted in a cloud might be more beneficial (the cloud provider likely has more specialized and skilled personnel).
Assess and document which data might be required by policy, preference, or regulation to be hosted in a particular country or region.
Determine which of the business’s mission-critical applications (those your customers use) can most benefit from a dynamically scalable, elastic environment that handles spikes or regularly expanding and contracting workloads.
Determine the security protections and risk profiles for each application or dataset.
Consider which applications, data, and workloads are best suited to be hosted in a low-cost public cloud and which might require a more customizable private cloud (possibly one hosted on premises):
Decide which workloads and data could be hosted in a public cloud and any concerns or risks with hosting in a public cloud environment (regardless of provider at this point). Then, consider which applications and data are too sensitive, too specialized, or otherwise would be best hosted in a private cloud (hosted on premises or in a managed private cloud hosted by a provider).
Determine whether your existing IT organization and staff should or could manage (continue to manage?) all of the applications, server farms, networks, and storage, and which workloads could be moved to the cloud. It is recommended that the internal IT department focus on applications that are critical to your business customers and that are unique (not a common commodity IT service that anyone can host). This consideration of who will host, manage, and operate any future clouds or modernized IT services combines technical, financial, and business planning.
If there are significant requirements for a private cloud, consider whether your organization can use a managed private cloud hosted by a provider or if deploying an on-premises private cloud is required. I recommend leaving out the financial and operational considerations of this decision initially when making this assessment.
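One way to make this public-versus-private assessment repeatable is a simple first-pass scorer per workload. The criteria and weights below are illustrative assumptions, not a standard methodology; treat the output as a conversation starter for the planning team, not a decision.

```python
# Hypothetical first-pass workload placement scorer: sensitivity and
# customization needs push toward a private cloud; commodity, bursty
# workloads push toward a low-cost public cloud. Weights are made up.
def placement(workload: dict) -> str:
    score = 0
    score += 3 if workload.get("sensitive_data") else 0
    score += 2 if workload.get("regulated_location") else 0
    score += 2 if workload.get("needs_customization") else 0
    score -= 2 if workload.get("commodity_service") else 0
    score -= 1 if workload.get("bursty_demand") else 0
    return "private cloud" if score >= 3 else "public cloud"

print(placement({"sensitive_data": True, "regulated_location": True}))
# private cloud
print(placement({"commodity_service": True, "bursty_demand": True}))
# public cloud
```

Scoring every candidate workload the same way produces the draft lists described above, which can then be argued over on their merits.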
Industry trends as organizations choose cloud deployment models:
Although experience and industry trends show that customers prefer the economics provided by the public cloud, it is private clouds that offer more flexibility, with customized features and security.
Small businesses often have little existing internal IT assets and are more likely to use public cloud SaaS models to meet their needs. Many of these small businesses start out using a SaaS offering rather than building something on premises and migrating to a cloud-based SaaS.
Larger businesses and public sector organizations often have a significant number of applications and data that they determine are best hosted on premises using a private cloud. If some workloads can benefit from a public cloud, the private cloud is enhanced with hybrid cloud management capabilities so that some workloads are provisioned to one or more public cloud services.
Trends over the past few years show little adoption of this community model. The primary concern with a community cloud is how the cloud is managed. Standards of communications, cloud management systems, and the services offered need to be agreed upon and upheld across multiple departments or organizations—not just initially, but for many years. This is where business challenges begin to reveal themselves. What happens if, in the future, one of the community cloud departments or organizations changes its business, budget, security standards, or other priorities? What if that department was offering critical resources to the community cloud that will no longer be available at the same level as originally agreed upon?
A recent industry trend is public cloud providers launching new hybrid services. These hybrid services focus on integrating traditional enterprise datacenters, typically on a customer’s premises, with public cloud services. With this hybrid capability, customers can federate authentication systems, synchronize data, support both enterprise and cloud applications, and failover enterprise servers to public cloud VMs. Consider this type of hybrid cloud (public cloud reaching back into internal enterprise IT) as a technique for early adoption, bursting, or long-term migration to a public cloud.
Often the easiest workloads to shift to a cloud computing environment are software development and testing tasks. Considering the immediate on-demand elasticity and low cost of basic VMs in a public cloud, it is often difficult to justify the legacy method of staging and hosting your own internal servers for Dev/Test purposes.
Consider what security level, if any, the Dev/Test environment must meet. Dev/Test environments don’t usually host production data that might have security or privacy concerns. Because of this, performing development, coding, quality testing, and user testing with a public cloud provider is a very attractive option.
Remember that a basic IaaS offering that provides VMs is not necessarily suitable for a complete Dev/Test environment. The more robust Dev/Test environments often include application lifecycle management (ALM) tools for developers to store programming code, testing tools, and the ability to promote development code to testing or production. When evaluating public cloud providers or building your own private cloud for Dev/Test purposes, consider what tools and operating systems are supported by the cloud platform.
Consider what ALM tools your company or developers utilize and whether the cloud provider will make these tools available, at potentially lower cost in a shared model, or how you can install your own copies of the ALM tools into the cloud service.
Consider implementing multiple virtual networks or subnetworks within your Dev/Test environment to simulate development and production networks. You might also want the ability to promote your Dev/Test applications into production subnetworks using ALM tools, as mentioned earlier.
Consider how you will save your Dev/Test cloud systems when there is a pause in the project. Will the cloud provider still charge for idle Dev/Test systems that continue to take up storage in the cloud? How long will the cloud provider maintain the idle (turned off) Dev/Test systems before deleting them?
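Answering those questions usually means tracking which Dev/Test systems are actually in use. The sketch below shows the kind of bookkeeping involved; the VM records and the 30-day cutoff are hypothetical, not any provider's API objects or retention policy.

```python
# Hypothetical idle-system report: flag stopped Dev/Test VMs that have been
# unused past a cutoff, so they can be archived or deprovisioned before
# storage charges keep accumulating.
from datetime import datetime, timedelta

def idle_vms(vms: list, now: datetime,
             max_idle: timedelta = timedelta(days=30)) -> list:
    """Return names of stopped VMs that have been idle past the cutoff."""
    return [vm["name"] for vm in vms
            if vm["state"] == "stopped" and now - vm["last_used"] > max_idle]

now = datetime(2015, 6, 1)
fleet = [
    {"name": "dev01", "state": "stopped", "last_used": datetime(2015, 4, 1)},
    {"name": "dev02", "state": "stopped", "last_used": datetime(2015, 5, 25)},
    {"name": "test01", "state": "running", "last_used": datetime(2015, 3, 1)},
]
print(idle_vms(fleet, now))  # ['dev01']
```

Running a report like this before each billing cycle tells you which paused project systems to keep, archive, or delete before the provider's own retention policy decides for you.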
When considering hosting virtual desktops in the cloud, assess the users and their applications to determine whether they are suitable for hosting in the cloud.
Consider that most virtual desktop services do not work in an offline mode (when disconnected from the Internet or corporate network); this might require data replication back to local laptops or tablets so that users can work offline while traveling, for example.
Consider where the data will be located for your normal desktop users and applications versus those who might use virtual desktops from the cloud. How will these users interact or share the same experience if there is data held in the cloud and also data back on the internal network?
Carefully test the cloud-based virtual desktop and applications for printing capability back to a local printer wherever the end user might be located (e.g., a hotel, at home, or at a remote office location).
Consider publishing individual applications that a large number of users need to access from any office, location, or Internet/cloud-accessible device. This is often more agile and easier for end users to accept than a full virtual desktop environment.
Consider the IT skills and effort required to manage a full remote desktop or application publishing environment. The creation of OS images, applications, profiles, and permissions is a significant effort, though not necessarily more than would be required in a traditional internal desktop environment.
1 Source: Simson Garfinkel, “The Cloud Imperative,” Technology Review, October 3, 2011, http://bit.ly/1cCr5Ux.
2 Some providers also offer physical servers within their IaaS offering. However, these are more expensive, because the provider has less ability to virtualize, load balance, and move workloads; in addition, the provider must have dedicated physical servers idle as excess capacity waiting for customer orders.