There is little doubt that the concept of cloud computing is still in its infancy and, as such, the definition of what cloud computing means is still much debated. This book doesn't seek to enter into the debate, instead preferring to focus on the services and capabilities that Azure can provide.
A good (and impartial) definition is provided by The National Institute of Standards and Technology (NIST) at http://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-145.pdf, which defines cloud computing as follows:
"Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction."
NIST go on to specify five essential characteristics of a cloud computing platform as follows:
Armed with these essential characteristics, it is apparent that there is a difference between a simpler hosting platform and a more fully featured cloud platform.
With a hosting platform, a company may employ VMware ESX hypervisor software on the private network to deploy and manage VMs. If a developer, for instance, requires a shared development VM on the company domain, it is usually not possible for the developer to start a VM in a self-service model: this would no doubt cause a few eyebrows to be raised by the IT infrastructure team! There is a finite limit on the total amount of resources assigned to the farm (disk, CPU, RAM, and so on), and this needs to be carefully managed. There is also a whole raft of questions and requirements around a regime of installing software patches, to mention just one area. In fact, in most organizations, a job ticket would need to be raised with the service provider (or internal IT infrastructure support team) for the VM to be provisioned. This proves a time-consuming and often onerous task.
Compare this description of a hosting platform with a cloud platform. With the cloud, the developer would be able to spin up a VM on a shared public platform, with the required supporting infrastructure and specifications, on-demand. In Azure, for instance, this could be achieved via the web-based Azure Portal or using a scripting language such as Windows PowerShell. Costing would be pay as you go: as long as the VM is running, the developer would be charged a finely tuned fee based on the resource consumption of the VM (disk usage, CPU load, network usage, and so on). If the VM is switched off, the fee would just be for storing the VM image file. The VM can be spun up and shut down on-demand, as required, by the developer. If extra resources are required (for indicative software load testing, for instance), these can be assigned to the VM, and capacity is effectively limitless. However, it would be the responsibility of the developer to install software and OS patches, thus maintaining and supporting the VM.
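The pay-as-you-go model described above can be sketched with some simple arithmetic. The following is a minimal illustration using entirely hypothetical hourly rates — real Azure pricing varies by region, VM size, and service tier, so treat the numbers as assumptions, not a price list:

```python
# Hypothetical (illustrative) rates -- NOT real Azure prices.
RUNNING_RATE_PER_HOUR = 0.10   # assumed compute + network cost while the VM runs
STORAGE_RATE_PER_HOUR = 0.01   # assumed cost of storing the VM image while stopped

def monthly_cost(hours_running: float, hours_in_month: float = 730) -> float:
    """Full rate while running; storage-only rate while the VM is switched off."""
    hours_stopped = hours_in_month - hours_running
    return (hours_running * RUNNING_RATE_PER_HOUR
            + hours_stopped * STORAGE_RATE_PER_HOUR)

# A developer VM used 8 hours a day, ~22 working days a month:
dev_hours = 8 * 22  # 176 hours
print(f"Always on:  ${monthly_cost(730):.2f}")    # $73.00
print(f"On demand:  ${monthly_cost(dev_hours):.2f}")  # $23.14
```

The point of the sketch is the shape of the model: shutting the VM down when it is not needed reduces the bill to storage only, which is precisely the behavior a hosting platform with fixed, pre-allocated capacity cannot offer.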
So, we can see here that a hosting platform is missing many of the essential characteristics of a cloud platform. What many may regard as a cloud platform is in fact a hosting platform. Azure is firmly a cloud platform. A key difference is that in a cloud platform, computing power is a commodity and as such needs to be measured and easily provisioned.
A cloud platform can be deployed in one of two modes:
At the time of writing, Azure Stack is in Technical Preview. More information on Azure Stack can be found at https://azure.microsoft.com/en-us/overview/azure-stack/.
Following on from this, an additional mode may be applied: hybrid. This typically describes a private or public cloud that is hooked up to one or more other, separate, cloud platforms (public or private). So this is an aggregation of at least two separate cloud platforms, each hosted on their own dedicated infrastructure, possibly providing extra capability to one or other cloud service provider in a way that is transparent to the user.
Azure is a public cloud owned, hosted, and operated by Microsoft, available to most organizations (and countries) across the globe. However, solutions can be built on Azure such that they are a hybrid. Consider, for example, a solution that is hosted primarily in Azure but leverages services in a company private data center running Azure Stack, exposed via RESTful APIs. In this case, the solution can be considered a hybrid because it utilizes services provided by both a public and a private cloud.
Another example hybrid solution may expose endpoints in Azure that forward requests to an endpoint in the local data center. Service Bus relays, for instance, provide this functionality. This is a pattern that is becoming more prevalent, as companies wish to leverage cloud solutions without opening wide the company on-premises firewall and proxy, relying instead on the security mechanisms offered by Azure.
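The relay idea described above can be sketched in a few lines. This is a conceptual illustration of the pattern only — the class names are invented for this example, and the real Azure Service Bus relay API looks quite different. The key point is that the on-premises listener opens an *outbound* connection to the cloud relay, so no inbound firewall ports need to be opened:

```python
class OnPremisesListener:
    """Runs inside the corporate network. It dials *out* to the relay,
    so the on-premises firewall never has to accept inbound connections."""
    def handle(self, request: str) -> str:
        return f"processed on-premises: {request}"

class CloudRelay:
    """Public endpoint in the cloud; forwards requests to a registered listener."""
    def __init__(self):
        self._listener = None

    def register(self, listener: OnPremisesListener) -> None:
        # In the real pattern, registration happens over the listener's
        # outbound connection; here we just store a reference.
        self._listener = listener

    def forward(self, request: str) -> str:
        if self._listener is None:
            raise RuntimeError("no on-premises listener connected")
        return self._listener.handle(request)

relay = CloudRelay()
relay.register(OnPremisesListener())   # outbound registration from on-premises
print(relay.forward("order #42"))      # an external client calls the public relay
```

External clients only ever see the cloud endpoint; security, throttling, and authentication can then be handled by the relay rather than by the company perimeter.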
Cloud providers typically break down their service offerings into three categories, which build on top of each other, as shown in the diagram here:
A diagram showing the relationships between Cloud Platform Services
Inevitably, it has become a great source of amusement to hijack the phrase ... as a service in humorous ways!
Jokes as a Service (JaaS)
It will be apparent to some that the ideas presented here touch on a great many old paradigms (indeed computing truths, if one may be so bold: concepts that have been proven true time and time again through many implementations and shown to be beneficial). Cloud computing is an agglomeration of a great many old ideas: for one, the concept of a shared pool of computing power invokes parallels with mainframes running the advanced time-sharing operating systems developed in the 1960s; likewise, the idea of software services that offer high cohesion provokes memories of SOA.
The base enabler for cloud computing is virtualization of computing resources and in many people's minds, this then puts Azure on par with an operating system that is essentially an abstraction of computing hardware for the purposes of ease of understanding and to ensure optimal use of the underlying hardware. But it is apparent that Azure is much more than an OS since it provides services typically in the area that would be considered application software, running on the OS.
Azure touches on so many aspects of computing, which is fascinating and at the same time overwhelming, in terms of effectively unlimited services that can be provided. But it is worth taking heart that core principles and characteristics exist that provide a jumpstart to learning about cloud platform services, which we hope to have introduced in this section of the book. So, all the old learnings are still relevant and provide a pattern for the future; rather, it is a case of something old for something new!
Now that we have a good understanding of what cloud computing is and the benefits that it can offer, let's examine the heart of this book: integrating systems and applications using the cloud.
Software integration is the process of connecting disparate systems and applications together that would not normally talk to each other, allowing data and business rules to be shared to drive automated business processes that add value to the business.
Traditional on-premises integration is concerned with linking internal systems and applications together and communicating with other businesses. An enterprise application integration (EAI) product such as BizTalk Server is very good at this and provides useful features out of the box, such as error handling and retry capability. However, it requires specialist knowledge, and it is now apparent that the demands of modern IT have changed the face of integration in several ways, as listed later, which has required new approaches to integration:
The nature of the cloud, with its elastic scalability and the investment of cloud providers in PaaS solutions that ease and speed up the development process (such as Azure App Service), means it is strongly positioned to solve these new integration problems of today.
As touched upon in the previous section, the cloud is well positioned to solve the new integration challenges, as the list of following properties demonstrates:
The risk associated with the new wave of cloud technologies is that the hype and excitement surrounding them places too much focus on the technologies themselves and not enough consideration on how they can be used as part of the integration toolbox, to build robust (hence the name of this book, Robust Cloud Integration with Azure) and supportable solutions that are:
These characteristics can be achieved through good design, which should not be forgotten.
One aim of this book is to show that integration design for the cloud is as important as ever, to prevent a proliferation of hard to maintain and fragile integration platforms that cannot be changed and expanded on in the future.
The evolution of modern web-based integration could be described as one from simplicity (for example, a single point-to-point solution), through the complexities of the service-first approach associated with SOA, leading naturally to a fully decoupled integration layer with an inference engine, using technologies such as an Enterprise Service Bus (ESB) and the simpler hub and spoke/publish and subscribe patterns of integration.
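The hub and spoke/publish and subscribe pattern mentioned above can be illustrated with a minimal sketch. The names here (`MessageHub`, `subscribe`, `publish`) are invented for this example and do not correspond to any specific ESB product's API; the point is simply that publishers know only a topic, never the receivers:

```python
from collections import defaultdict
from typing import Callable

class MessageHub:
    """The hub: routes each published message to every subscriber of its topic."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message: dict) -> None:
        # The publisher is fully decoupled from the receivers: adding a new
        # subscriber requires no change to any publishing system.
        for handler in self._subscribers[topic]:
            handler(message)

hub = MessageHub()
received = []
hub.subscribe("orders", lambda m: received.append(("billing", m)))
hub.subscribe("orders", lambda m: received.append(("shipping", m)))
hub.publish("orders", {"id": 1})
print(received)  # both subscribers receive the same message
```

Contrast this with point-to-point integration, where the order system would have to call billing and shipping directly, and every new consumer would mean another hard-wired connection.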
iPaaS solutions such as Azure App Service build on the service-first approach but to a more granular level (the microservice level). If a service represents a discrete function, the microservice idea goes one step further, breaking a service down into even more discrete micro functions.
The timeline here represents an example company's journey from no integration, to a complex mesh of many varied point-to-point solutions, to integration in the cloud over the course of a few years:
Diagram showing an example of point-to-point service connectivity
Further information about the routing slip pattern is available at http://www.enterpriseintegrationpatterns.com/patterns/messaging/RoutingTable.html.
More information about the WS-* standards is available at https://msdn.microsoft.com/en-us/library/ms951274.aspx.
Throughout this chapter, we have talked about Azure, PaaS, and the evolution of integration. The microservices architectural pattern has also been briefly touched upon and this will be fleshed out further in the following sections, because it is a pattern underpinning many of the current PaaS solutions.
In order to maximize the benefits of the cloud, it is essential to understand what architectural principles we should follow to maximize the use of cloud elasticity and also to be aware of the different design patterns that can provide increased granularity and isolation to a solution.
We have seen so far that cloud solutions are innovative: they have changed the way businesses target potential customers today. If you have a product catering to a large customer base, you can leverage the cloud to have infrastructure and services running across multiple geographic regions, and in almost no time. This was not the case a few years ago, when you would devote months to getting hardware procurement and provisioning done. The cloud has eased the process of acquiring new customers and expanded the business horizon across multiple demographic boundaries.
As a business grows, the complexities around delivering services also increase. Today, businesses want to work with the SaaS approach to have continuous delivery along with continuous updates. No business wants to shut down for a patching activity or a service feature enhancement.
We have seen business requirements where new features need to be added to a product, requiring multiple updates to the hosted service within a single day. We have also seen use cases where a business needs to scale up or scale down based on current and future demand. How can this be done, whether the software solution is simple or complex? The answer is to follow the correct design while building the software.
In this decade, we have seen software design evolve. It has changed from desktop-based applications to applications running on the Internet and on devices. In the following diagram, we have tried to summarize the evolution stages:
The evolution of software from the desktop to microservices architecture
From the earlier diagram, we can easily trace how software design has changed.
We started building software for desktops, and with the emergence of networking and the Internet, we started slicing software design vertically into layers; in other words, we divided the software into tiers. This is where we have all heard terms such as client-server architecture, or two-tiered, three-tiered, and multi-tiered architecture. The main objective of tiered architecture was to divide software responsibility into layers.
A simple diagram for three-tiered architecture
If we look at three-tiered architecture, every layer has to perform a certain set of functions. The UI layer is responsible for user interaction, the business layer takes care of business logic, and the database layer is responsible for storing data.
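The separation of concerns just described can be compressed into a small sketch. Everything here (the class names, the order rule) is illustrative; the structural point is that each layer talks only to the layer directly below it:

```python
class DatabaseLayer:
    """Data tier: responsible only for storing and retrieving data."""
    def __init__(self):
        self._rows = {}

    def save(self, key, value):
        self._rows[key] = value

    def load(self, key):
        return self._rows.get(key)

class BusinessLayer:
    """Business tier: applies business rules; knows nothing about the UI."""
    def __init__(self, db: DatabaseLayer):
        self._db = db

    def place_order(self, order_id, quantity):
        if quantity <= 0:
            raise ValueError("quantity must be positive")  # an example business rule
        self._db.save(order_id, quantity)

class UILayer:
    """Presentation tier: user interaction only; delegates all logic downward."""
    def __init__(self, business: BusinessLayer):
        self._business = business

    def submit(self, order_id, quantity):
        self._business.place_order(order_id, quantity)
        return f"order {order_id} accepted"

ui = UILayer(BusinessLayer(DatabaseLayer()))
print(ui.submit("A-100", 3))  # prints "order A-100 accepted"
```

Because each tier depends only on the one below, the database tier could be swapped out without touching the UI, which is exactly the maintainability benefit tiered architecture was designed to deliver.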
With the emergence of cloud virtualization, infrastructure automation, continuous delivery, and domain-driven design, businesses started looking for SaaS-based approaches to provide services to end users. Tiered architecture has certain limitations here, and that's where microservices fit into the overall concept of designing distributed systems.
A Shared DATA model for Monolith Architecture
With a shared data approach, we have structured data, but it is not agile. Today, business is changing fast; to accommodate these fast-paced changes, we need to move away from the shared data approach.
The list does not end here: you can find plenty of content on the Internet discussing other limitations of the monolithic application design.
Because of the limitations of monolithic architecture in designing distributed systems, discussed earlier, James Lewis and Martin Fowler came up with an application architecture model named microservices. It has gained popularity as the basis for building distributed systems.
So what are microservices, and what are their different characteristics?
A microservice can be explained as a small, self-contained unit of functionality that serves a specific business capability.
In simple words, a microservice is an independent unit that follows the principle of single responsibility. Single responsibility means each microservice has a set of well-defined features, has a boundary, and should run in a separate process.
This is the pattern where we divide an application into component parts, each of which is an independent unit of business functionality. A basic principle of microservices is that a service should not overlap with other services or share any common data storage. In this way, each microservice provides a layer of abstraction and isolation from the other microservices in a distributed system.
While we are discussing the microservices architecture, it is very important to understand the set of common characteristics that each microservice will have.
A distributed system is a model in which components on a network communicate with each other by sending and receiving messages. The message format can be of multiple types, such as flat files, XML, and JSON. To learn more about distributed computing, refer to https://en.wikipedia.org/wiki/Distributed_computing.
The following points show the characteristics of the microservices architecture:
The decentralization of data storage
From the earlier diagram, we can easily see how microservices are independently structured and do not share any common data storage.
Microservices should be independent and free to choose the data source of their choice; some may choose a relational database, some may choose NoSQL, and another might use queues, a filesystem, and so on. This is how we remove dependencies across multiple microservices.
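The decentralized data characteristic can be sketched as follows. The service and method names are invented for illustration; what matters is that each service owns its storage privately (which could be SQL for one and NoSQL for another in practice), and other services reach that data only through the owning service's public interface, never directly:

```python
class OrderService:
    def __init__(self):
        self._orders = {}           # private store (a relational DB in practice)

    def create_order(self, order_id, item):
        self._orders[order_id] = item
        return order_id

    def get_item(self, order_id):
        # The ONLY way other services can see order data.
        return self._orders[order_id]

class ShipmentService:
    def __init__(self, order_service: OrderService):
        self._shipments = []        # a separate private store (NoSQL, a queue, ...)
        self._orders = order_service

    def ship(self, order_id):
        # Reaches order data via the OrderService interface, not its storage:
        item = self._orders.get_item(order_id)
        self._shipments.append((order_id, item))
        return f"shipped {item}"

orders = OrderService()
shipping = ShipmentService(orders)
orders.create_order("A-1", "book")
print(shipping.ship("A-1"))  # prints "shipped book"
```

Because `ShipmentService` never touches `_orders` directly, the order team can change its schema or even its database technology without breaking the shipment team, as long as the interface holds.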
A component model with microservices
Each microservice has well-defined boundaries, and together, they make a complete service offering. When we think of microservices, we always ask how big a microservice should be. We would say divide services into independent chunks such that a small team can handle the overall responsibility. Another driving factor is how easily you can enhance, replace, or upgrade a component without affecting the functioning of other services.
"Organizations which design systems are constrained to produce designs which are copies of the communication structures of these organizations".
Further information is available via https://en.wikipedia.org/wiki/Conway%27s_law .
The "Two Pizza Theory" of team distribution
Based on Conway's Law, Amazon came up with the Two Pizza Theory. It states that you should divide your teams to be so small that it is possible to feed each one with two pizzas!
If we combine Conway's Law and Amazon's Two Pizza Theory and think in terms of the microservices pattern, we can say that to get an optimal output for the business, it is a good choice to have teams organized around business capabilities rather than teams driven by technology. This gives service ownership to a team, and the team has full control over service changes as long as it does not break consuming services.
If we take the example presented in the earlier diagram, a team dedicated to SHIPMENT will function better than a team responsible for the whole business. Make your team a master of a specific business area instead of training them in everything. A team claiming to have knowledge of everything might not provide the same output as a small, dedicated business team can.
Automated testing, automated deployment, and automated scale up/scale down of systems: all these are key aspects of microservices. When you design microservices, you should keep in mind where you want to run your services: on-premises or on cloud-based infrastructure. With cloud-based infrastructure, you have a lot of flexibility toward automation.
What you build today will need to be modified in the future as per business requirements.
Keeping this in mind, you should design your microservices for failure. Failures can be technical failures, hardware failures, or implementation failures; your application design should handle these exceptions gracefully.
An Example Microservices Architecture with a Faulting Component (the Payment Service)
Consider the earlier diagram: if the PAYMENT module for the enterprise is not working, then with the microservices concept, it should not halt the whole business. Other modules, such as Order and Wish List, should continue to work. This way, each component's execution is independent of the others.
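Designing for failure in this way can be sketched with a simple degradation path. The payment service here is a hypothetical stand-in (in a real system, the failure would be a network timeout rather than a flag), but the shape is the point: the failure is caught at the boundary and the rest of the application keeps working:

```python
class PaymentService:
    """Stand-in for a remote payment microservice; `healthy` simulates an outage."""
    def __init__(self, healthy: bool = True):
        self.healthy = healthy

    def charge(self, amount):
        if not self.healthy:
            raise ConnectionError("payment service unavailable")
        return f"charged {amount}"

def checkout(payment: PaymentService, wish_list: list, item: str, price: float):
    try:
        return payment.charge(price)
    except ConnectionError:
        # Degrade gracefully: save the item so the user can retry later,
        # instead of letting the payment outage halt the whole order flow.
        wish_list.append(item)
        return "payment unavailable; item saved to wish list"

wish_list = []
print(checkout(PaymentService(healthy=False), wish_list, "book", 9.99))
print(wish_list)  # the Wish List feature still works during the outage
```

Production systems typically layer retries, timeouts, and circuit breakers on top of this basic idea, but the principle is the same: one faulting component must not cascade into a whole-system failure.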
In a real distributed system, you will be dealing with multiple servers, multiple log files, and maybe multiple networks as well. So what happens if some service starts giving you trouble? It will be a nightmare to monitor all the moving parts!
This is where the concept of smart monitoring and decentralized governance comes into play. If you are working on a cloud-first approach to designing a distributed system, Azure provides you with a lot in the smart monitoring space. Throughout this book, we will discuss different monitoring techniques.
In the earlier sections, we have discussed a lot about monolithic design and microservices. The following table summarizes the key differences between the two architectures:
| Monolithic | Microservices |
| --- | --- |
| In a monolithic application, functional units are not autonomous | As the microservices concept is designed on the principle of single responsibility, each microservice is autonomous |
| The monolithic architecture approach is good when designing a small application, as it hides the complexity | The microservices pattern is most useful when you are trying to build distributed applications |
| The monolithic approach is good when your application does not require regular feature updates | Microservices benefit most when your application requires frequent updates and feature enhancements |
| If you have hardware limitations, it is better to go with the monolithic design approach | If you are looking for an application design that you can easily scale up/scale down and hardware is not a consideration, then it is better to build it through the microservices design pattern |
| You can have a single version of the software running on any specific hardware | You can have multiple versions of the same microservice running |
| Monolithic applications are language dependent; you need to develop each layer in a specific language | With microservices, you can choose the language of your choice; you may develop one microservice in Java, another in Node.js, and the rest in C# |

Differences between the monolithic and microservices architectural styles