Chapter 1. What Is Cloud Native?

The research findings are not surprising. Today, the most successful companies and organizations are those that most quickly and efficiently develop and deploy innovative software products and services to increasingly demanding customers. They are doing so by building applications in increments, continuously releasing better versions in a cycle of build-evaluate-learn-improve.

Consider a venerable business such as consumer banking. Traditional brick-and-mortar banks are heavily challenged today by so-called fintechs. These are banks without walls or front doors, operating their businesses mostly or entirely in the cloud via the internet. Time and again, they have shown an ability to beat traditional banks to market with new products like high-interest savings accounts and certificates of deposit with fractionally higher rates of return. They were ahead of traditional banks in offering their services to smartphone users, thereby gaining significant market share with millennials—the biggest smartphone users of all. Fintechs have mastered both lower operational costs and faster time to market for mission-critical investments. They understand that speed and efficiency are the true coin of the realm in banking.

Enter Cloud Native

The same holds true across the entire spectrum of vertical markets and organizational types. There is tremendous pressure to develop and deploy software and services faster and with far greater efficiency. These requirements, however, collide with traditional means of developing and operating software, which are anything but efficient and speedy.

IT and business leaders alike are acutely aware of these challenges before them. They are also aware of the potential of cloud computing to offer businesses cost savings as well as flexibility to respond quickly to change. Thus, the concept of cloud native has quickly emerged as one of the hottest topics in software development. It is changing the way organizations think about how they will develop, deploy, and improve applications, all in tight alignment with underlying business requirements.

Cloud native is an innovative approach to building and operating essential applications that fully take advantage of the now-familiar cloud-computing model. It is the biggest event to come along in software development since the adoption of virtual machines and virtualization 20 years ago.

Traditions Not Worth Keeping

It will be useful here to take a brief look at traditional software development methods, which are no longer in step with the needs of most businesses today.

IT leaders in almost any organization will gladly offer their opinion on traditional development methodologies, also called waterfall. That opinion will be, “There has to be a better way to do this.” That better way, as IT leaders know, is Agile development, also called Rapid Application Development (RAD). Where waterfall is defined by a plodding, linear or step-by-step approach to developing software, RAD stresses an iterative, team-based approach.

As we show in this ebook, traditional linear development is painfully slow, with new applications often taking so long to develop that changing business conditions can render them useless by the time they are deployed. They are chronically difficult to patch and update, let alone improve with new features. They require significant manual processes, making them very costly.

RAD would seem to be the preferred methodology. However, RAD has problems, too. What if some developers aren’t as dedicated as they need to be to make RAD work? What if they aren’t in close proximity to other developers on the team? What if the nature of iterative development leads to constant rethinking or refactoring of initial project designs and architecture?

The good news is that these and other shortcomings of RAD and Agile development do not signal a reembrace of waterfall methodologies, with all their unpredictability, burdensome manual operations, operating system and infrastructure dependence, and the resulting silos of information. This is where the cloud-native approach comes in. As the CEO and founder of Heptio puts it, cloud native is “about changing [the] way enterprises approach technology, moving from a world where technology is static to a world where it is delivered as a set of services and can move at the speed of business.”

Or as Deloitte points out, cloud native offers companies a pathway to markedly decreased time to market for business-critical applications; significantly lower costs of development; the ability to run these applications across a broad range of infrastructures; and even enhanced application security. What’s not to like?

Understanding Key Concepts of Cloud Native

To most people, cloud native is not exactly an intuitive concept. Not only that, but understanding what cloud native is, as well as its importance to the software development community, and indeed to enterprise-class organizations globally, means coming to terms with, well, cloud-native terms. And there are many such terms defining various elements of the cloud-native approach to software development. Most of these terms are interrelated, and therefore understanding what they are individually is instrumental to grasping cloud native as a whole. The following section defines many of these terms, which will be placed in their appropriate contexts throughout the rest of this ebook.

DevOps

The best way to think of DevOps is as an increasingly popular set of development practices designed to break down long-standing barriers between development teams and IT operations staff. These barriers have been a major cause of excessively long development and deployment cycles, cumbersome updating and patching operations, and excessive costs caused by these fundamental disconnects. For example, development would produce a truly innovative new customer-facing product, only to find that operations cannot spare the computing cycles to test it or run it. Or, operations staff informs the developers after the fact that their development work skirted essential security rules.

By contrast, the chief goal of DevOps is as straightforward as it is universally lauded: to shorten the systems development life cycle while allowing speedier delivery of features, fixes, and updates. Moreover, all of these essential tasks are carried out in lockstep with overall business goals. At the heart of DevOps lies a seamless relationship between development teams and IT operations, which, as we’ve noted, has historically been terribly out of sync. DevOps is central to defining a culture that looks strategically at the software delivery chain in its entirety, taking advantage of shared services while promoting the use of new development tools and best practices.

Another important factor in DevOps is that it promotes and prescribes regular, continuous integration and the continuous delivery of new and improved code (called CI/CD for short). DevOps also features continuous deployment, characterized by an even higher level of agility requiring further testing in production cycles. With the old barriers between development and IT operations obliterated in DevOps, new software features can be rolled out every few weeks or even every few days—more frequently in some cases. This is in stark contrast to the way software traditionally has been released: as enormous, monolithic blobs of software dumped onto unsuspecting operations staff.

But as many early adopters of DevOps have found, it is not a panacea. The usual problem is that organizations often regard DevOps as a largely technological phenomenon. The reality is quite different. DevOps involves a significant cultural shift within organizations that traditionally have enjoyed their private fiefdoms within the IT environment. In fact, some analysts feel that an overwhelming percentage of DevOps efforts are doomed to fail, in large part because organizations fail to recognize how difficult internal cultural change can be.

Continuous Delivery, Continuous Upgrades, Automated Scalability

These three essentially are subsets of DevOps. They are vital to the overall success of DevOps efforts, and thus deserve some attention here. In software engineering, continuous delivery is defined as an approach wherein development teams produce software bit by bit or piece by piece in short development cycles. Continuous delivery also features extraordinarily high levels of automation. Ideally, this approach results in software that can be released reliably at any time and with greater overall speed.
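
To make the idea of an automated delivery pipeline concrete, here is a minimal, illustrative sketch in Python of the kind of steps such a pipeline automates: run the test suite, build a container image, and publish it so the software can be released at any time. The image name and registry are hypothetical, and the use of pytest and the Docker CLI is an assumption for illustration; real teams usually run these stages inside a CI/CD service rather than a hand-rolled script.

    # pipeline.py: a minimal, illustrative continuous-delivery sketch.
    # Assumes pytest and the Docker CLI are installed; the image name is hypothetical.
    import subprocess
    import sys

    IMAGE = "registry.example.com/myapp:latest"  # hypothetical image/registry name

    def run(cmd):
        """Run a command and stop the pipeline if it fails."""
        print("==>", " ".join(cmd))
        result = subprocess.run(cmd)
        if result.returncode != 0:
            sys.exit(result.returncode)

    if __name__ == "__main__":
        run(["pytest", "-q"])                       # 1. run the automated test suite
        run(["docker", "build", "-t", IMAGE, "."])  # 2. build a container image
        run(["docker", "push", IMAGE])              # 3. publish it for release at any time

The point is not the particular commands but the shape: every step is scripted and repeatable, so a release is never blocked by a manual hand-off.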

Making use of CI/CD, continuous upgrades means pretty much what it says: namely infusing software with constant tweaks and improvements; for example, in response to shifting business conditions or new opportunities. But it’s more difficult than it sounds, largely because it bucks the historical norms of how software has been developed. Seen this way, continuous upgrades, as with DevOps, is as much or more about cultural change as it is about technology.

Automated scalability means replacing manual tasks (primarily software testing, but other steps in the build process as well) with automation. To date, this works well in smaller projects but is proving far more challenging in larger deployments—which is the goal of scalability in the first place.

Containers

The way software has been developed traditionally and the ways in which it is developed in the cloud-native environment couldn’t be more different. For example, software traditionally is delivered with a lot of overhead, such as dependencies on particular hardware or libraries (e.g., prewritten code or configuration data) so that it can run properly. To get away from these dependencies and make it possible, for example, for the software to run on any hardware, IT put applications and these dependencies inside their own virtual machines (VMs). But these VMs are usually huge, often several gigabytes in size.

Thus was born the concept of containers, which have become increasingly popular in the past five years. Containers permit developers and development teams to package software, along with everything needed to run it (such as a Java runtime, an application server, or the application itself), into a single executable unit. Containers make it easier to host applications within today’s popular portable environments because they can be moved from one environment to another and run seamlessly wherever they end up. Properly designed, containers are very small compared to VMs—as in megabytes versus multiple gigabytes. And, very important, because they use so few resources and are so small, they start up almost instantly. Developers love that.

Thus, a container is hardware independent and therefore highly portable; it can move, for instance, between the test system, the quality assurance system, and the production system. Containers represent a very efficient way to combine software components into application and service stacks that dovetail with business requirements, and an equally efficient way to keep software maintained and continuously updated.

Docker

Though the container concept has been around for decades, it was the emergence of Docker in 2013 that really ignited widespread container adoption. An open source tool, Docker makes it much easier to create, deploy, and run applications using containers. Because it is a tool designed for both developers and system administrators, it is a key element of most DevOps efforts. Being open source means that anyone anywhere in the world can contribute to making Docker better, and also extend its capabilities to meet their organization’s unique needs. Plus, developers can take advantage of literally thousands of programs already designed and created to run in a Docker container as part of their own application.
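
For a small taste of how developers work with Docker programmatically, the sketch below uses the Docker SDK for Python (pip install docker) to pull a small public demo image and run it as a throwaway container. It assumes a local Docker daemon is running; the image chosen is just an example.

    # A minimal sketch using the Docker SDK for Python; assumes a local Docker daemon.
    import docker

    client = docker.from_env()            # connect to the local Docker daemon
    output = client.containers.run(
        "hello-world",                    # a small public demo image
        remove=True,                      # clean up the container when it exits
    )
    print(output.decode())                # the container's stdout

This is the programmatic equivalent of running "docker run --rm hello-world" at the command line: the image, the application, and everything it needs travel together as one unit.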

Kubernetes

Another open source project, Kubernetes makes it amazingly easy to deploy, scale, and manage containerized applications. Specifically, Kubernetes is the tool of choice for automating the deployment, management, scaling, networking, and availability of containerized applications. These tasks are broadly referred to as orchestration, defined next.

Orchestration

In container-assisted DevOps, deploying an application with all of its “baggage” in a container is helpful. But in a cloud-native environment, what developers really want is to exploit the many advantages of cloud platforms. In the cloud environment, development teams must oversee a variety of tasks involving the configuration, coordination, and management of cloud computing systems, along with resource-management tasks such as scheduling. Together, these tasks are called orchestration. There are several tools available to assist you and automate these tasks, including Mesos/Marathon, Docker Swarm, and, of course, Kubernetes.
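
As a hedged illustration of what orchestration looks like in practice, the sketch below uses the official Kubernetes Python client (pip install kubernetes) to declare that a deployment should run three replicas; the orchestrator then starts or stops containers until reality matches that desired state. The deployment name and namespace are hypothetical, and the sketch assumes you have a working kubeconfig for a reachable cluster.

    # A minimal sketch with the official Kubernetes Python client.
    # Assumes a reachable cluster; "web" and "default" are hypothetical names.
    from kubernetes import client, config

    config.load_kube_config()          # read credentials from ~/.kube/config
    apps = client.AppsV1Api()

    # Ask the orchestrator for three replicas of the "web" deployment;
    # Kubernetes reconciles running containers to match this desired state.
    apps.patch_namespaced_deployment_scale(
        name="web",
        namespace="default",
        body={"spec": {"replicas": 3}},
    )

The same request can be made with "kubectl scale deployment web --replicas=3"; either way, the developer states the goal and the orchestrator handles the scheduling, networking, and restarts.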

Microservices

Think of microservices as an architectural style or methodology that structures an application not as one huge, complex monolith, but rather as a collection of small services that communicate over lightweight protocols.

Each microservice is typically centered around a business scenario and has its own datastore. Microservices acknowledge the fact that many applications are easier to develop and maintain when they can be subdivided or broken down into small pieces that work together (said to be loosely coupled). Each of these pieces is continuously improved, making the whole application the sum of these finely engineered parts. Applications built using microservices are considerably easier to test and to improve continuously, and much faster to design and deploy. Developers can also scale microservices independently.
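
To make this concrete, here is a minimal sketch of a single microservice written with Flask (one of the lightweight frameworks mentioned later in this chapter). The routes, port, and hard-coded price are purely illustrative; a real service would add its own datastore, logging, and tests, but the shape is the same: one small, independently deployable service with a narrow business focus.

    # A minimal microservice sketch using Flask (pip install flask).
    # Routes and data are illustrative assumptions, not a real API.
    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.route("/health")
    def health():
        # Orchestrators commonly probe an endpoint like this to decide whether
        # this instance is alive and can receive traffic.
        return jsonify(status="ok")

    @app.route("/prices/<product_id>")
    def price(product_id):
        # One narrowly scoped business capability; data access is stubbed here.
        return jsonify(product_id=product_id, price_usd=9.99)

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8080)

Packaged in a container, a service like this can be tested, deployed, and scaled on its own schedule without touching the rest of the application.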

Rapid Recovery

What you see is what you get here. Rapid recovery means quickly bouncing back from some sort of system failure, implying the ability to protect systems, applications, and data. It is often synonymous with duplication or redundancy. In the cloud-native environment, recovery from failure is often no more complicated than automatically provisioning the cloud resources needed (additional storage, for example) to ensure a fast return to full service after an outage or larger disaster.

Resilience

Considered by most to be a primary goal of DevOps, resilience (more specifically, application resilience) is the capability of an application to respond to problems in one of its many components while continuing to provide the overall services of the greater application. Such problems often crop up when applications are deployed across multiple technology infrastructures, a common scenario in cloud-native environments. The same concern applies within a Kubernetes (commonly abbreviated K8s) cluster.
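
One common way applications achieve this kind of resilience is to retry calls to a misbehaving component with exponential backoff and, if it stays down, fall back to a degraded answer rather than failing outright. The sketch below shows that pattern in plain Python; fetch_recommendations and its empty-list fallback are hypothetical stand-ins for a real dependency.

    # A minimal resilience sketch: retry with exponential backoff, then degrade gracefully.
    # fetch_recommendations() is a hypothetical call to another microservice.
    import time

    def call_with_retries(func, attempts=3, base_delay=0.5):
        for attempt in range(attempts):
            try:
                return func()
            except Exception:                            # real code would catch specific errors
                if attempt == attempts - 1:
                    raise
                time.sleep(base_delay * (2 ** attempt))  # back off: 0.5s, 1s, 2s, ...

    def fetch_recommendations():
        raise ConnectionError("recommendation service unavailable")  # simulated failure

    try:
        items = call_with_retries(fetch_recommendations)
    except Exception:
        items = []  # degrade gracefully: the rest of the application keeps serving users

The failing component is isolated, and the greater application keeps working, which is exactly the behavior resilience describes.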

Multitenancy

Often associated with public cloud environments, multitenancy means that a single instance of a software solution serves multiple “tenants,” or customers, on shared hardware. Each customer has access only to its own data and configuration (through specific privileges) within this shared system. It is akin to a large office building housing many individual companies, each with access only to its defined space. However, all tenants compete for underlying system resources (like the kitchen or gym in the office building), which the cloud hosting company must skillfully manage.

OpenStack

Of growing interest to organizations moving to cloud-native environments, OpenStack is an open source platform specifically designed for cloud computing. It is often deployed as Infrastructure-as-a-Service (IaaS), wherein virtual servers and other cloud resources are made available to users. OpenStack consists of a set of tools for building and managing cloud computing platforms for both public and private clouds. It has earned both the respect and backing of the biggest software development and cloud hosting companies globally. Being open source means that any user can access the OpenStack source code and make whatever custom modifications they want, and do so without charge.
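
For a sense of what working with OpenStack looks like from code, the sketch below uses the openstacksdk Python library (pip install openstacksdk) to connect to a cloud and list its compute instances and images. The cloud name "mycloud" is an assumption; it refers to credentials defined in a local clouds.yaml file.

    # A minimal sketch using openstacksdk; "mycloud" is a hypothetical entry in clouds.yaml.
    import openstack

    conn = openstack.connect(cloud="mycloud")

    # IaaS resources (servers, images, networks, and so on) are reachable via API calls.
    for server in conn.compute.servers():
        print("server:", server.name, server.status)

    for image in conn.image.images():
        print("image:", image.name)

The same APIs that list resources here can create and delete them, which is what makes it practical to provision infrastructure automatically rather than by manual request.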

A Deeper Dive into Cloud Native

The nonprofit Cloud Native Computing Foundation (CNCF) sums up cloud native very well as, “An approach that builds software applications as microservices and runs them on a containerized and dynamically orchestrated platform to utilize the advantages of the cloud computing model.”

All of the aforementioned concepts fit into the cloud-native mosaic in one way or another, separating cloud-native development from traditional application development. By definition, cloud-native technologies produce applications built using the microservices architecture pattern and packaged within containers. They are then deployed and continuously managed through DevOps processes. And although cloud-native development typically occurs within cloud environments, cloud native is much more about how applications are built and deployed than about where. The applications live in the public or private cloud and are accessed by users accordingly.

In other words, think of cloud-native applications as the juncture of Agile development and cloud platforms, a combination that delivers unprecedented speed, scalability, and efficiency in application development.

Cloud-native applications, as mentioned previously, are loosely coupled and typically stateless. In practical terms, this means that they are not tied to any one particular infrastructure. This makes it much easier for the application to scale horizontally, depending upon demand, because developers don’t need to worry about the ability of the underlying infrastructure to support the application. If demand peaks, you just provision increased server resources from the cloud. By contrast, in traditional development environments, operations teams manage the allocation of computing resources to support applications, with much of that management being manual (that is to say, costly) in nature.

Power Couple: DevOps and Cloud Native

Cloud-native development typically will include heavy usage of DevOps processes, microservices, and containers. These are all elements of Agile, scalable development designed to give developers and teams lots of reusable code in small packages that are easily updated, patched, and redeployed. For example, many complex functions of applications are broken down into microservices that can be turned off when they aren’t needed, to conserve resources.

A hallmark of cloud native is continuous development of software that can then be deployed seamlessly across different cloud infrastructures. For example, development involving highly sensitive data can take place securely on a private cloud. But software testing and quality assurance can be moved to the less-expensive public cloud using dummy datasets. The overall impact is greater efficiency (lower cost) in development for cloud-native applications, while data security is assured.

Another hallmark of cloud-native development is much higher levels of automation (or fewer time-consuming and expensive manual processes), owing to the fact that so many cloud services are already automated. Increasingly reliable orchestration tools automate much of the application management in the cloud-native environment.

Finally, traditional software development utilizes traditional development languages such as C, C++, Enterprise Java, and COBOL (which was written when Dwight Eisenhower was president), although Java remains widely used in cloud-native applications. By contrast, development in the cloud-native environment takes advantage of best-of-breed modern development languages, frameworks, and tools such as WebSockets, Python, Flask, gRPC, and Protocol Buffers (protobufs), to name a few. This gives cloud-native developers a wide range of choice in selecting precisely the right framework or language for the job.

Of course, the best way to understand cloud native and its IT and business benefits is through the lenses of organizations that have taken big steps to move toward the cloud-native development approach. Following are four such organizations.

Case Study: Extra! Extra! The Financial Times Shaves Months off Deployments While Nearly Eliminating Errors

Though it was founded 131 years ago, The Financial Times of London has been something of a digital pioneer. It was the first UK newspaper to report bigger earnings from digital subscriptions versus print. Thus, it isn’t surprising that The Times was also an early adopter of continuous software delivery, taking advantage of containers, microservices, and orchestrators. The goal has been to respond more quickly to changes in the highly dynamic media business. Specifically, technology leaders at The Times had sought to significantly reduce the time to market of business-critical applications, some of which were taking as long as three months to deploy.

The Times’ overall cloud-native strategy began with systematically moving infrastructure to the cloud, starting with its own virtualized infrastructure and then later adopting Amazon Web Services (AWS). As Sarah Wells, technology lead at The Times, noted, “Custom infrastructure was not a business differentiator for us.” The Times now uses off-the-shelf cloud services, including Database-as-a-Service.

The Times also moved its monolithic content platform to about 150 microservices, each of which handles a single function or component. Multiple teams within The Times support each microservice in a so-called many-to-many scheme. This helps because teams work end-to-end on the delivery of key application features, such as “publish videos,” that often span multiple microservices.

Experience taught The Times’ content platform team that containers are the gateway to orchestration in this cloud-native environment. For example, by using large AWS instances to host multiple containerized processes—controlled with an orchestrator—The Times reduced hosting costs by an impressive 75%. A pioneer in the aggressive use of orchestration, the organization built its own orchestrator from open source tools but lately has been evaluating the latest off-the-shelf products such as Kubernetes.

The results of the company’s cloud-native movement are impressive to say the least, including a radical reduction in error rates from 20% to less than .1%. IT leaders attribute these reductions to their ability to release small application changes more often with microservices. And, as Wells points out, “Our goal of becoming a technologically agile company was a major success. Teams moved from deployments taking 120 days to only 15 minutes.” The impact on development teams has been “completely liberating.”

Case Study: Volkswagen Hits the Development Accelerator with Cloud Native

As the world’s largest automaker and Germany’s biggest industrial company, Volkswagen Group faced the challenge of standardizing and automating its vast IT infrastructure. And like so many other enterprise-class organizations, Volkswagen sought marked increases in the speed with which it innovated and delivered business-essential applications. With the entire auto industry embracing digital transformation, Volkswagen, too, wanted to embrace Agile software development processes to deliver state-of-the-art solutions such as smart parking and navigation. IT leaders at Volkswagen knew that attaining these goals would be a tall order, particularly given that the effort would be applied across all Volkswagen brands and divisions at its global operations.

Part of Volkswagen’s challenge resulted from recent growth, during which the company’s IT environment had become decentralized and heterogeneous. With different brands and divisions operating different platforms, the overall IT infrastructure was relatively costly, characterized by a wide range of development tools and specialized hardware. Development became very labor intensive and lengthy. Far too much vital IT time was being spent on operational issues, taking away from strategic development.

It didn’t take IT leaders long to realize Volkswagen needed a next-generation cloud platform to bring the company’s development efforts up to full highway speed. In particular, these leaders wanted a new solution to unify and automate work streams and platforms across the entire, far-flung organization. New systems needed to replace the legacy infrastructure while still connecting to the legacy platforms that maintain vital data.

Also, Germany’s ultra-strict data privacy laws limited Volkswagen’s public cloud options in this modernization effort. Thus emerged the decision to transition to a private cloud environment. IT leaders believed open source cloud platforms would support speedier innovation and development cycles, and this thinking led to OpenStack, the leading open source private cloud platform. OpenStack eliminates the dependence on any one vendor, which Volkswagen liked.

The company opted for Mirantis OpenStack as its standardized company-wide cloud platform, with a cloud accessible not only to 610,000 employees but also to suppliers, dealers, and customers. Almost immediately upon initial platform deployments, Volkswagen noted successes with collaborations with the worldwide OpenStack developer community. With OpenStack and its cloud-native approach, IT leaders say Volkswagen has launched a DevOps culture with CI/CD tool chains for rapid testing and deployment of new ideas.

The results thus far of Volkswagen’s transformation to a cloud-native approach have been enviable. Time to provision platform resources has dropped from months to minutes, requiring just a few clicks. Early experience has also validated the lower infrastructure costs of the private cloud compared with the former infrastructure. Volkswagen has also added Platform-as-a-Service, a move that will include new tools to further speed up release cycles and improve overall application quality.

Recent workloads moved to production include a customer-facing car configurator website that was transitioned from a legacy platform in just six months. Moreover, Volkswagen has reinvented its IT operations and team culture, consolidating all teams into cloud operations teams to drive innovation and faster delivery.

Case Study: ASOS Making Fashion Statement with Cloud Native

ASOS boasts more than 18 million active customers, 21 million social media followers, and is closing in on $3 billion in sales. Not too bad for a company founded in 2000 in the ultra-competitive fashion retail business. Not surprisingly, ASOS’ immodest goal is to become the world’s top online shopping destination “for 20-somethings.”

A key element of ASOS’ current success is its embrace of a cloud-native approach to continuous development and deployment. As David Green, enterprise architect at ASOS, says, “The ability to provide fast response times is key to our business.” Underlying ASOS’ efforts to “significantly improve” execution speed is cloud-native adoption, he maintains.

When the company was first formed, its tech visionaries built their own in-house platforms, eschewing off-the-shelf solutions. That early strategy gave way to the need for more rapid response to customer requirements as well as competitive pressures, fostering a closer look at cloud hosting. Today, ASOS services once housed in its own datacenters are migrating to Microsoft’s Azure platform, with the goal being 100% cloud by the end of next year. The main impetus was to relieve IT staff from burdensome operational responsibilities, allowing them instead to focus on strategic application development. The movement to Azure is accomplishing exactly that.

Then came the embrace of cloud-native approaches, accented by the heavy use of microservices. These services, running on the flexible Azure cloud infrastructure, have been central to ASOS’ efforts to boost application speed to market.

ASOS’ overall goal of speeding up the deployment and updating of key business applications is reflected in its overall data architecture. Thus, a key attraction of the microservices architecture is ASOS’ ability to make more granular choices about how and where data is maintained, which further helps with critical response times.

The results for ASOS represent a validation of its cloud-native strategy. Black Friday sales for 2018 exceeded all expectations for scale and response times across the key applications supporting such sales spikes. Looking ahead, ASOS’ IT leadership team is transitioning to take advantage of Azure’s managed stateful services. ASOS is also aiming to improve server resource utilization while it mulls using containers and orchestration, tools that are arguably less mature on Windows than on Linux.

Case Study: DISH Channels Cloud Native to Take on the Big Guys

Taking on behemoth competitors like Verizon and Comcast is not for the faint of heart. But with 14 million subscribers and 16,000 employees, multichannel video service provider DISH Network is more than holding its own. And what DISH lacks in size, the company is determined to make up with raw speed and agility, which often can elude a bigger competitor. Its strategy of choice in this tough battle is Agile development and a cloud-native bent. Only through innovation and speed can DISH deliver desirable new services to consumers faster and more reliably than the competition, including increasingly popular mobile services.

To best support a far-flung network of installation and repair technicians, as well as to markedly speed application development and deployment, DISH IT leaders looked to the cloud for its on-demand services and flexibility. They settled on the open source Cloud Foundry platform from Pivotal. Initial projects on the platform included applications to update offer and order management, and an application to streamline customer sales interactions, as well as installation and fulfillment services. All these applications are mission critical to DISH.

Because software development is at least equal parts technology and people, DISH began training developers in paired programming and test-driven development. Part of this emphasis on team development included renovation of a historic building in downtown Denver with the look and feel of a startup, with development teams working in close proximity.

Taking advantage of both this work environment and the cloud-native platform, DISH developers today are testing new applications and features far earlier and often, with decisions made at the line of code, not in separate conference rooms by individuals removed from the real development action.

The result has been a marked increase in the time developers now have to prototype and turn new ideas into new applications, no longer constrained by tedious, manual development tasks. What’s more, software release schedules are now measured in weeks instead of months. Teams that used to roll out quarterly releases now do so on biweekly schedules, meaning more and better updates and, ultimately, better customer-facing applications.

Cloud-Native Challenges

No technology or IT methodology is without challenges and drawbacks, and cloud native is not exempt from this rule. For example, in their zeal to take advantage of cloud native’s benefits, some organizations have made the mistake of trying to hoist legacy applications into the cloud. It doesn’t work. As noted earlier, these old applications come with lots of baggage and infrastructure dependencies that don’t work well in the cloud. It’s a better idea to safely decommission these applications in ways that preserve the legacy data but quietly lay the actual applications to rest. OpenText is one of several companies offering solutions for doing exactly this.

With DevOps being a central feature of the cloud-native environment, IT leaders often underestimate the difficulty in getting people to transition to the collaborative way of working that is essential to DevOps and cloud native. You might have all the Agile tools and technologies at your disposal, but they won’t be worth much if you have a less-than-agile workforce.

Also, because so many cloud-native tools and frameworks are relatively new and undergoing continuous improvement of their own, deciding on which tools to use can be a tough call, especially for development teams just wading into the cloud-native world. There is no shortcut or magic-bullet solution for figuring out what’s right here, other than perhaps consulting peers who bear the scars of previous wrong decisions.

Summary

Cloud native is working across a broad swath of business types and organizations because it underlies application development that is tightly coupled to business goals. Cloud native defines an entirely new development paradigm, replacing older methodologies with roots extending back five decades. With its exceptional level of operations automation, cloud native can free highly trained developers and coders from the drudgery of operations management. Instead, they can focus on developing high-performance, highly innovative customer-facing applications capable of driving revenue and profit while building customer loyalty.
