Chapter 3: Cloud-Native Architecture Patterns and System Architecture Tenets

So far in the book, we've covered the fundamental principles of a cloud-native architecture as well as the tools in a cloud-native app development pipeline. This chapter is all about the operational best practices that you must adopt in order to create resilient, scalable, and secure cloud-native applications. As a developer, you are perhaps considering skipping this chapter as you're well aware of the design patterns involved in software development. But wait. The reason we've included this important chapter is that design patterns in cloud-native app development are different from traditional software development. More importantly, by the end of this chapter, you'll have a practical understanding of using different design patterns and best practices to address specific development challenges, client needs, and platform requirements.

This chapter is divided into three parts. The first part gives a brief overview of cloud-native patterns and their scope. The second part deals with the actual challenges solved by cloud-native patterns, and finally, the third part covers the actual architectural patterns that you can use to achieve your development goals.

In this chapter, we are going to cover the following topics:

  • Cloud-native patterns
  • Solving challenges with cloud-native patterns
  • Cloud-native design patterns
  • Hybrid and multi-cloud architecture recommendations

Cloud-native patterns

Like traditional design patterns, cloud-native patterns help solve architectural problems by identifying the common challenges of a development process and modifying workflows so that problem-solving becomes part of the development process itself, making app development significantly faster.

In order to create such patterns, first, a problem needs to be identified. Design patterns are usually created to solve recurring problems that can happen in any environment and while working on any project. The other part of design patterns is the solution to the problem. Due to the nature of the problems that design patterns solve, the solution needs to be broad enough to be applied to hundreds of different scenarios and still resolve the problem with equal effectiveness.

There is a third part in design patterns, but it isn't talked about enough and thus is often ignored – the trade-offs. Cloud-native patterns are a conscious design choice aimed at solving a wide range of problems, which means they are undoubtedly going to have limitations. Therefore, when applying a pattern, the developer needs to know the consequences of that design and decide whether it's going to affect the app negatively.

Furthermore, design patterns are not just best practices or set-and-forget solutions. Some of them can be thought of as broad strategies for avoiding problems, but the vast majority of cloud-native design patterns are methodological approaches to specific problems. They have a premise that contains the problems and a process to overcome them. We are calling them approaches here instead of workflows or processes because design patterns can be modified to an extent that they can be applied to different applications, different infrastructures, and different platforms.

The scope of cloud-native patterns

Design patterns are developed to solve recurring problems in app development. As technology progresses, new problems arise and so do design patterns. As a result, there is no defined number of design patterns. That said, cloud vendors and platforms recognize some common development challenges and best practices to avoid them and have packaged them into cloud design patterns to guide developers.

Since there are hundreds of cloud-native patterns available out there but a limited number of pages in this chapter, we are going to limit our discussion to a handful of the most powerful cloud-native patterns that are central to cloud-native app development. These core tenets will prove to be the most important for developers who are just starting out with the cloud and want to develop reliable, scalable, and secure applications.

In order to facilitate your understanding of these patterns and give you more context on why and when to use them, we'll also discuss the main challenges in cloud app development.

Solving challenges with cloud-native patterns

Picture this: you're an hour into creating your very own cloud-native application and you're face to face with your first major hurdle. It could be anything – a client request that you're not sure how to implement, misbehaving code, or any problem in any one of the dozens of spheres of app deployment: scaling, security, resilience, monitoring, and so on.

Now you can either stop development and focus on coming up with a solution to your problem, or you can avoid this situation entirely by taking a step back and planning your development process around cloud-native patterns.

You're almost certainly going to face challenges in any form of app development, but that's fine because solutions to almost all of your problems already exist. The goal, however, isn't to use them to clear roadblocks as they appear but to avoid roadblocks in the first place. The first step is to identify the types of challenges you might face.

In this section, we'll look at some of the challenges that exist in modern cloud computing and how they can be overcome with cloud-native patterns.

Be proactive, not reactive

Design patterns are a forward-thinking engineering approach – they should be thought of not as a failsafe but as an active attempt to avoid known problems before they arise. Thinking ahead saves time, effort, and money in the form of the following:

  • Fewer errors
  • Reduced downtime
  • A more resilient and scalable application

To achieve any of this, you need to identify the main challenges that other developers face during cloud app development. There are hundreds of small issues that pop up at different stages of the development cycle, but a vast majority of these issues can be categorized into five main areas. The design patterns that we'll look at later in the chapter will also solve various challenges that fall under these categories.

Scaling and performance

The performance of your application is highly dependent on how it scales, and scalability is one of the main benefits of using a cloud-native architecture. Your application's performance refers to a wide range of metrics, such as latency and load times. These indicators are influenced by scalability (how much demand the app can handle), availability (how often it fails), and overall responsiveness (how quickly the system responds). For peak performance, developers adopt a wide range of design patterns, from basic ones such as autoscaling, autohealing, and load balancing to more targeted patterns with very specific objectives, such as Command Query Responsibility Segregation (CQRS) and event sourcing.

The performance of your app will also have a direct impact on running costs. Google Cloud uses a pay-per-use pricing system for most of its services, and while using more resources may result in better performance, it's not the most efficient method. For one, Google Cloud does not allow for vertical scaling of running instances (a running machine's type cannot be changed for greater capacity; machines can only be changed in quantity). Furthermore, there are numerous ways in which you can optimize your infrastructure and improve performance. For instance, using zones and clusters wisely can decrease network latency. We'll talk about many other optimization tips throughout this book.

Tip

In this context, the term cluster refers to a physical unit of infrastructure within a data center, while regions are geographical locations, each divided into multiple zones – at the time of writing, there are 28 regions and 85 zones. Being conscious of zones and clusters allows you to optimize your app for different parts of the world. Decreased latency is one benefit but, by using multiple zones within a region, you also ensure that your customers can still use your service if one zone suffers an outage or failure.

Deployments

Designing and implementing are two sides of the same coin and equally important – though implementation is arguably underrated. Cloud-native architecture is all about speed, and when working professionally, your customers will demand this speed in the form of more frequent deployments, reduced time to market, and a reduced update failure rate. Common deployment challenges you're likely to face include automating and orchestrating development and operational tasks, integrating different services and components to complete the development pipeline (from Chapter 2, End-to-End Extensible Tooling for Cloud-Native Application Development), drafting and implementing policies, and more.

Thankfully, Google Cloud is architected around deployments and is full of tools, services, and design patterns that make deployment easier. We've already looked at two, very broad but also very important patterns for deployment, called Continuous Integration/Continuous Deployment (CI/CD) pipelines and DevOps. In the next section, we'll look at more deployment patterns.

Resiliency and availability

Resiliency is one of the core tenets of cloud-native architecture, along with scalability, agility, serverless operation, and loose coupling. Cloud-native applications are architected to be resilient with the help of microservices: loosely coupled services help isolate failures and prevent them from cascading system-wide. However, while the services themselves are isolated and inherently resilient (to some extent), the infrastructure needs to be properly configured and the right mechanisms (graceful shutdown/restart handling, termination signals such as SIGTERM, audit trails, and so on) put in place to improve disaster recovery and response times.

Resiliency and availability are related concepts, but they're not the same. Resiliency is concerned with recovering from unexpected failures, while availability is concerned with uptime – ideally, around the clock. Under real-world conditions, however, availability is affected by a number of factors, including infrastructure problems, high system loads, under-provisioning, bad code, and more. Availability can be maximized using data stores, proper resource provisioning, monitoring, load leveling, and so on.

Monitoring

Monitoring is crucial to maximizing resiliency and maintaining security. However, compared to on-premises solutions, monitoring a cloud-native architecture poses different challenges, mostly because you're usually sharing resources on a public cloud where you do not have full administrator rights and access. At first glance, this might seem like a big hindrance to management and monitoring. But this isn't a design flaw, and cloud vendors provide a number of monitoring tools and services that can be combined with a wide range of monitoring patterns such as the anti-corruption layer, log sinks, audit trails, sidecars, and partner services acting as a single pane of glass.

Security

Cloud computing platforms such as Google Cloud are used to power some of the largest B2C operations in the world. Needless to say, security is a core aspect of cloud computing, and therefore, out of the box, it's much safer than on-premises solutions. However, without the proper implementation, these security protocols won't help much. Furthermore, individuals with malicious intents are always trying to find vulnerabilities and so developers must regularly update their policies and systems to ensure no security vulnerabilities arise between updates. Security patterns and approaches help mitigate problems related to confidentiality, compliance, availability, and system integrity.

Cloud-native design patterns

Now that you have a clearer understanding of the common challenges of cloud-native app development, we can take a deeper look at cloud-native design patterns. Remember, there are hundreds of design patterns out there and this book cannot possibly detail every single one of them. So instead, we'll be focusing on some of the popular design patterns and the ones that are most relevant to you.

Microservices

This might seem redundant now that we're in the third chapter, but microservices are more than just a basic criterion of cloud-native applications – they often form the basis of the solutions to a surprisingly large number of problems faced in cloud-native app development. In addition to keeping microservices loosely coupled and isolated, developers can follow a range of microservices principles and best practices that are crucial in avoiding problems such as system-wide downtime, slow updates, lack of agility, and slow responses to disasters and failures. Furthermore, it's highly recommended that these principles and best practices be followed strictly right from the start, as it becomes more difficult to introduce them in the later stages of the development cycle.

With this in mind, the following is a list of powerful principles and best practices that, when followed, can help you avoid some common but annoying problems.

Separate database schemas for separate services

One of the ways microservices tend to lose their effectiveness is when developers do not create a separate database schema for each service, which results in services becoming less independent, prone to failures, and tightly coupled. To prevent this, each service should talk only to its own schema.
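As a minimal sketch of this idea, the following uses two in-memory SQLite databases to stand in for separate schemas. The service names (orders, inventory) and table layouts are purely illustrative: each service reads and writes only its own store, never the other's.

```python
import sqlite3

# Each service owns its own schema; two in-memory SQLite databases stand in
# for separate schemas here. Names and tables are hypothetical.
orders_db = sqlite3.connect(":memory:")
orders_db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, item TEXT)")

inventory_db = sqlite3.connect(":memory:")
inventory_db.execute("CREATE TABLE stock (item TEXT PRIMARY KEY, qty INTEGER)")
inventory_db.execute("INSERT INTO stock VALUES ('widget', 5)")
inventory_db.commit()

def place_order(item: str) -> int:
    # The order service touches only its own schema...
    cur = orders_db.execute("INSERT INTO orders (item) VALUES (?)", (item,))
    orders_db.commit()
    return cur.lastrowid

def stock_level(item: str) -> int:
    # ...and the inventory service touches only its own.
    row = inventory_db.execute(
        "SELECT qty FROM stock WHERE item = ?", (item,)).fetchone()
    return row[0] if row else 0
```

If the order service ever needs stock data, it should ask the inventory service through its public API rather than query the other schema directly.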

Services should communicate through their public APIs

Designing services to communicate directly with each other, through backdoors, or through any means other than their public APIs will also result in the architecture becoming more tightly coupled. Ensuring that services communicate solely through their public APIs also reduces administrative overhead and architectural ambiguity.

Ensuring backward compatibility through comprehensive testing and API versioning

One way to ensure that no update causes consumer-facing failures is to use API versioning along with testing each update for backward compatibility. Other ways to achieve safe deployments include staged roll-outs, blue/green deployments, and canary releasing (we'll learn more about this in the coming section).
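A minimal sketch of API versioning, with hypothetical routes and handlers: v1 keeps its original response shape while v2 introduces a breaking change, so the two can be served side by side without disturbing existing consumers.

```python
# Hypothetical versioned routing table. v1 consumers keep getting the old
# response shape even after v2 changes it.

def get_user_v1(user_id: int) -> dict:
    return {"id": user_id, "name": "Ada Lovelace"}

def get_user_v2(user_id: int) -> dict:
    # v2 splits the name field -- a breaking change isolated to /v2.
    return {"id": user_id, "first_name": "Ada", "last_name": "Lovelace"}

ROUTES = {
    "/v1/users": get_user_v1,
    "/v2/users": get_user_v2,
}

def handle(path: str, user_id: int) -> dict:
    return ROUTES[path](user_id)
```

A backward-compatibility test suite would then pin the v1 response shape and fail the build if an update changes it.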

Running standardized tests using a single command

Testing is non-negotiable. A high-performing cloud-native application requires a lot of testing, especially with regular updates. To make the testing process easier and quicker, it's highly recommended that developers create a standardized way to run tests on development workstations with a single command.

Setting up comprehensive monitoring

Even a thoroughly tested update can develop problems over time. More importantly, cloud-native applications can lose efficiency and performance without proper monitoring. This is why it's important to take the time to set up the proper monitoring solutions to keep a close eye on application performance, service health, system availability, and so on. The monitoring solution will also help immensely in debugging.

Setting service-level objectives (SLOs)

Setting expectations is important but setting goals is even more important. This is why it's recommended that developers set Service-Level Objectives (SLOs) early on. SLOs can bring more clarity in terms of application resilience and also help find limitations.

Performing disaster recovery tests regularly

Finally, it's equally important to push your application to its limits and beyond in a controlled environment. As such, developers should perform disaster recovery tests regularly. Developers can use techniques such as controlled failure injection to understand how the system would react during a failure.

These practices have emerged after a lot of research and are a proven way to improve cloud-native app development. That said, implementing these ideas can still be challenging, especially to new cloud developers. One way to counter these challenges is to share the workload with a team. At the end of the day, what's important is that you learn and improve – whether you do it alone or with a team does not matter.

Strangler applications

Creating a strangler application is especially useful for developers who already have a legacy monolithic application that they want to migrate to a microservice architecture. In the strangler process, the monolithic application is broken into microservices one piece of functionality at a time:

Figure 3.1 – The strangler pattern


Over time, the monolith shrinks, and you incrementally develop a cloud-native application. Once most of the basic features have been ported from the monolith, developers can begin leveraging the cloud-native platform and its tools to add new functionality.

Decomposition patterns

Let's continue with the theme of migrating a monolith to a cloud-native application. There are a few ways in which you can break down an application into services, also known as decomposition patterns. For instance, you can decompose an application into services on the basis of business capability. By doing so, each resulting microservice will represent a specific app functionality, such as product catalog versus order management versus inventory management.

Alternatively, you can divide services on the basis of subdomains. Every business and its application can be classified into different subdomains that represent its core, supporting, and generic activities:

Figure 3.2 – Decomposition strategy


Another option is to decompose each service by team. If you're working with a large team or have a number of developers working with you, you can assign one microservice to one team. This is ideal if you'd like to promote a more laissez-faire form of management where each team has complete control over and responsibility for their microservice.

Event-driven patterns

If microservices communicate using a traditional request/response protocol, the communication network becomes increasingly difficult to manage as the app scales. Therefore, cloud-native platforms support and promote an event-driven architecture – a communication pattern that scales easily and is capable of Complex Event Processing (CEP).

The event-driven pattern works by using events to trigger certain actions – similar to the If this, then that principle. This is a very powerful design pattern as it helps remove the communication bottleneck and create complex chains of events and commands:

Figure 3.3 – Event-driven patterns

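The "if this, then that" flow can be sketched with a minimal in-process event bus. In a real deployment this role would be played by a managed broker (such as Pub/Sub); the event names and services here are hypothetical.

```python
from collections import defaultdict
from typing import Callable

class EventBus:
    """Minimal in-process event bus: publishers and subscribers never
    reference each other directly, only the event type."""

    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, event_type: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[event_type].append(handler)

    def publish(self, event_type: str, payload: dict) -> None:
        for handler in self._subscribers[event_type]:
            handler(payload)

bus = EventBus()
shipments = []

# A hypothetical shipping service reacts to order events without the order
# service knowing it exists -- the two stay loosely coupled.
bus.subscribe("order.placed", lambda event: shipments.append(event["order_id"]))
bus.publish("order.placed", {"order_id": 42})
```

Adding a second subscriber (say, an invoicing service) requires no change to the publisher, which is the key scaling property of the pattern.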

Next, we're going to discuss Command Query Responsibility Segregation (CQRS).

Command Query Responsibility Segregation

CQRS is a data management design pattern that separates the write logic (commands) from the read logic (queries) in a data store. But what's the point of separating the command part from the query part? Well, CQRS allows you to scale read and write workloads independently, which brings benefits such as improved scalability and performance, as well as greater flexibility and control.
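A heavily simplified CQRS sketch: commands append to an authoritative write store, while queries are served from a separately maintained read model. In a real system the read model would usually be updated asynchronously (for example, via events) and each side could be scaled and stored independently; the product names here are illustrative.

```python
write_store = []   # authoritative, command-side store
read_model = {}    # denormalized, query-side view

def handle_command(product: str, delta: int) -> None:
    """Command side: record the change, then refresh the read model.
    (Updated inline here for simplicity; often done asynchronously.)"""
    write_store.append((product, delta))
    read_model[product] = read_model.get(product, 0) + delta

def query_stock(product: str) -> int:
    """Query side: reads never touch the write store."""
    return read_model.get(product, 0)

handle_command("widget", 10)
handle_command("widget", -3)
```

Because reads only hit `read_model`, a read-heavy workload can be served from many cheap replicas while the write store stays small and consistent.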

The saga pattern

Since microservices are isolated from each other, they are also responsible for their own data. However, there are many scenarios where an app needs to reliably share data between different microservices. This is where sagas come in. The saga pattern is another important data management pattern, like CQRS, that helps overcome the isolated nature of data in microservices while keeping them loosely coupled. Sagas work by creating a sequence of local transactions, where each transaction is triggered by the previous one. If a local transaction fails, the saga acts as a failsafe and takes compensating actions to undo the changes made by the preceding transactions.
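The sequence-plus-compensation idea can be sketched as follows. Each step pairs a local transaction with a compensating action; when a step fails, the completed steps are undone in reverse order. The order-processing step names are hypothetical.

```python
class SagaFailed(Exception):
    pass

def run_saga(steps):
    """Run (action, compensation) pairs in order; on failure, run the
    compensations of all completed steps in reverse."""
    completed = []
    try:
        for action, compensation in steps:
            action()
            completed.append(compensation)
    except Exception as exc:
        for compensation in reversed(completed):
            compensation()
        raise SagaFailed("saga rolled back") from exc

log = []

def reserve_stock():  log.append("stock reserved")
def release_stock():  log.append("stock released")
def charge_card():    raise RuntimeError("payment declined")
def refund_card():    log.append("refunded")

try:
    run_saga([(reserve_stock, release_stock), (charge_card, refund_card)])
except SagaFailed:
    pass
```

Note that only completed steps are compensated: the failed payment step never ran to completion, so no refund is issued, but the earlier stock reservation is released.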

Multiple service instances

Multiple service instances is a scalability pattern in which developers run multiple instances of a microservice. The premise behind it is horizontal scaling: on Google Cloud, applications scale by increasing the number of virtual machines, not by increasing the capacity of one machine. This concept extends to microservices as well – instead of provisioning more resources to one instance of a microservice, we run multiple instances on different virtual (or physical) hosts. We will refer to this as horizontal autoscaling, which will be discussed in Chapter 16, Orchestrating your Application with Google Kubernetes Engine.

The benefits of the multiple service instances pattern include the rapid deployment of a microservice as well as far more efficient scalability.

Canary deployments

Canary deployments, also known as limited rollouts, are a deployment pattern that helps avoid major failures by releasing updates to a limited number of users or servers. The limited release gives developers the ability to test the update without affecting the entire user base with potential bugs or glitches. Furthermore, a failed canary deployment is far easier and quicker to recover from than an entire system failure.
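One common way to implement the limited release is deterministic, percentage-based routing: hash each user ID into a bucket so that a fixed slice of traffic lands on the canary, and each user is routed consistently across requests. This is a sketch, not any particular load balancer's mechanism; the percentage and labels are illustrative.

```python
import hashlib

CANARY_PERCENT = 10  # send roughly 10% of users to the canary release

def route(user_id: str) -> str:
    """Deterministically map a user to 'canary' or 'stable'."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return "canary" if bucket < CANARY_PERCENT else "stable"
```

Because the routing is a pure function of the user ID, a user never flip-flops between versions mid-session, and rolling back simply means setting `CANARY_PERCENT` to zero.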

Stateless services

The stateless services pattern is a data management design pattern with performance and cost benefits. The difference between stateless and stateful services is that the former does not permanently store information such as preferences, user profiles, workflows, and other session data. Every time a stateless application runs, it does so from scratch without any reference to the previous instance. Any information that the stateless application needs to run is stored in persistent data stores and is retrieved on startup each time. Stateless services help eliminate the additional overhead of maintaining state and also speed up the startup process. You'll be working with stateless applications if you decide to build your application on Kubernetes Engine or Cloud Run.

Immutable infrastructure

Immutable infrastructure is a broader pattern that can be thought of as an infrastructure paradigm or strategy. The premise is to increase resiliency by never updating or modifying deployed servers. To change a server's configuration, the changes are applied to a common image and a new server is deployed from it. This improves reliability and resilience by avoiding problems such as configuration drift and snowflake servers that can arise from regularly updating a server. Immutable infrastructure also reduces the risk of failure and, if a failure does occur, offers a simple recovery process (redeploying from the version-controlled image history).

Anti-corruption layer

Let's, once again, assume that you're migrating a monolith to a cloud-native application. How do you ensure that the legacy monolith's domain model won't pollute the domain model of a new service? By implementing an anti-corruption layer between these two different systems. It won't completely separate the two systems but will act as a semi-permeable barrier that translates requests from the old system to the new system.

You'll also find anti-corruption layers useful later in the development cycle when you need to connect subdomains with different semantics. Using the same principle (a semi-permeable adapter layer), you'll be able to transfer data and requests from one domain to another without being limited by the different dependencies used in the other domain.

API composition

The API composition pattern helps implement complex queries that join data from multiple services. Using an API composer, developers can invoke the services that own the data and perform an in-memory join of the results before returning the aggregate to the consumer. This is a simple way to implement complex queries in a microservice architecture, but in-memory joins become inefficient when working with larger datasets.
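A minimal composition sketch: two hypothetical client functions stand in for calls to the services that own each dataset, and the composer joins their results in memory before returning the aggregate.

```python
def fetch_orders(customer_id: int) -> list[dict]:
    # Stand-in for a call to the order service's public API.
    return [{"order_id": 1, "product_id": "p1"},
            {"order_id": 2, "product_id": "p2"}]

def fetch_products() -> dict:
    # Stand-in for a call to the product catalog service.
    return {"p1": {"name": "Widget"}, "p2": {"name": "Gadget"}}

def orders_with_names(customer_id: int) -> list[dict]:
    """API composer: joins order and product data in memory."""
    products = fetch_products()
    return [{"order_id": o["order_id"],
             "product": products[o["product_id"]]["name"]}
            for o in fetch_orders(customer_id)]
```

For large result sets, the join cost grows with the data fetched; patterns such as CQRS with a pre-joined read model are the usual alternative at that scale.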

Event sourcing

You might find yourself in a position where you need to update the database and send messages to the consumer reliably. How do you do that? One of the most common ways is event sourcing. The event sourcing pattern persists the state of a business entity as an append-only sequence of events. The application can then reconstruct the entity's state by replaying the events, using the append-only store as the system of record. Event sourcing improves scalability and performance and is also reliable, as it provides a complete audit log of the changes made to a business entity.
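The append-and-replay mechanics can be shown in a few lines. The account/balance domain is illustrative; the essential point is that the event list is the only thing ever written, and current state is always derived from it.

```python
events = []  # append-only store: the system of record

def record(event_type: str, amount: int) -> None:
    """Every change is captured as an immutable event; nothing is updated
    in place, so the full history doubles as an audit log."""
    events.append({"type": event_type, "amount": amount})

def current_balance() -> int:
    """Reconstruct the entity's state by replaying all events."""
    balance = 0
    for event in events:
        if event["type"] == "deposited":
            balance += event["amount"]
        elif event["type"] == "withdrawn":
            balance -= event["amount"]
    return balance

record("deposited", 100)
record("withdrawn", 30)
```

In practice, periodic snapshots avoid replaying the full history on every read, but the event log remains authoritative.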

The Retry pattern

A cloud-native application performs flawlessly when the dozens or hundreds of microservices in it never miss a beat. However, in the real world, it's very difficult to guarantee this. Errors such as Out Of Memory, timeouts, server errors, and so on are common culprits. Furthermore, even consumer-side problems can cause a microservice to be unable to complete its task.

Since, in many cases, the error is short-lived and resolves after a delay, an error-handling design pattern known as the Retry pattern is adopted. The Retry pattern transparently retries a failed operation over the network a given number of times, giving transient faults a chance to self-correct.
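A bare-bones retry helper might look like the following: it retries a failing call a bounded number of times with a growing delay, and re-raises only after the final attempt. The attempt count, backoff schedule, and `flaky` service are all illustrative.

```python
import time

def with_retries(func, attempts: int = 3, delay: float = 0.01):
    """Call func, retrying on any exception up to `attempts` times with a
    simple linear backoff between tries."""
    for attempt in range(1, attempts + 1):
        try:
            return func()
        except Exception:
            if attempt == attempts:
                raise  # retries exhausted: surface the fault
            time.sleep(delay * attempt)

calls = {"n": 0}

def flaky():
    # Hypothetical service that fails twice, then succeeds.
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("transient")
    return "ok"

result = with_retries(flaky)
```

Production-grade retries usually add exponential backoff with jitter and retry only on errors known to be transient, so that permanent failures fail fast.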

Circuit breaker pattern

The Retry pattern works only when the fault is short-term and self-correcting. However, not all faults will meet these criteria and in many cases, the fault might not be fixed irrespective of the number of retries.

Unfortunately, in many cases, the failure of one microservice can lead to other microservices being unable to collect necessary information, causing them to fail and in turn, starting a snowball effect.

In such cases, it's important not to waste CPU resources on retrying. Instead, accept that a microservice has failed and tell the rest of the application to continue operating without it (possible in highly decoupled systems).

This is a popular design pattern known as the circuit breaker and is often deployed alongside the Retry pattern.
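A stripped-down breaker illustrating the fail-fast behavior: after a threshold of consecutive failures, calls are rejected immediately instead of hitting the broken service. Real implementations (and service meshes) also add a cooldown after which the breaker "half-opens" to probe for recovery, which is omitted here for brevity.

```python
class CircuitOpen(Exception):
    """Raised when the breaker rejects a call without attempting it."""

class CircuitBreaker:
    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.failures = 0  # consecutive failure count

    def call(self, func):
        if self.failures >= self.threshold:
            # Open circuit: fail fast, spend no resources on the call.
            raise CircuitOpen("service presumed down")
        try:
            result = func()
        except Exception:
            self.failures += 1
            raise
        self.failures = 0  # success resets the breaker
        return result
```

Pairing this with the Retry pattern gives each fault a few chances to self-correct while preventing endless retries against a service that is genuinely down.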

The bulkhead pattern

The hulls of ships are designed as separate compartments to reduce the risk of sinking even when the hull is breached, by isolating water into a few compartments. A similar design called the bulkhead pattern can be implemented in cloud-native applications to reduce the risk of an entire application stack failure when one or two microservices fail.

The premise of the bulkhead pattern is to partition services and their resources based on demand and availability requirements, in such a way that resource exhaustion or failure in one partition cannot bring down the entire system.
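One common realization of a bulkhead is a bounded pool per dependency: a semaphore caps how many concurrent requests one downstream service may consume, so a slow or failing dependency cannot exhaust the shared thread or connection pool. This is a sketch of that idea, not any specific library's API.

```python
import threading

class Bulkhead:
    """Caps concurrent calls to one dependency; excess calls are rejected
    immediately rather than queuing up and starving other work."""

    def __init__(self, max_concurrent: int):
        self._slots = threading.BoundedSemaphore(max_concurrent)

    def call(self, func):
        if not self._slots.acquire(blocking=False):
            raise RuntimeError("bulkhead full; rejecting request")
        try:
            return func()
        finally:
            self._slots.release()
```

Each downstream dependency gets its own `Bulkhead`, mirroring a ship's compartments: flooding in one compartment stays in that compartment.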

Using the cloud-native pattern judiciously

A cloud-native pattern is not a silver bullet. Simply adopting an organizational pattern will not make you five times as efficient, nor will a security pattern resolve all security problems forever. Remember, design patterns aren't just a combination of a problem and a solution – they also come with complications and consequences.

There are hundreds of cloud-native patterns, each with its own set of trade-offs, and it's not realistic to be aware of all of them as you begin writing your first cloud-native application. The best thing you can do right now is to keep an open mind about the solutions and designs you implement and be open to experimentation. A key difference between cloud-native applications and traditional monolithic applications is that the former is more forgiving when it comes to experimentation (especially during the later stages of the development life cycle). Use this opportunity to find out more about the various design patterns so you can develop a powerful understanding of which pattern to use for different scenarios. Furthermore, there is a wealth of information and technical documentation available on most of Google Cloud's solutions and how their performance and productivity can be maximized.

Hybrid and multi-cloud architecture recommendations

In addition to the design patterns we have just discussed, there are a few recommendations that developers can adopt to improve their hybrid and multi-cloud architecture.

Going forward, we'll refer to these as patterns because, despite not being design patterns in the strict sense, they give developers potential solutions to the unique problems of cloud-native architecture.

Distributed deployment patterns

The following patterns apply when different parts of your application run in the computing environments that suit them best, playing to the features and characteristics of each environment.

The tiered hybrid pattern

The tiered hybrid pattern suggests migrating the frontend of the application to the cloud first while the backend stays in its original computing environment. The main premise is that frontend applications are usually stateless and therefore easier to migrate. Additionally, since frontend applications are updated frequently and are subject to varying levels of traffic, the CI/CD pipelines and autoscaling enabled by the cloud help reduce workload through automation:

Figure 3.4 – The tiered hybrid pattern


However, going the tiered hybrid route isn't always recommended and it should be chosen on a case-by-case basis.

The partitioned multi-cloud pattern

In some cases, hosting your application on one cloud vendor (such as Google Cloud) may not be enough. For instance, developers may choose to host their application on two separate cloud platforms to test the differences and find out which is better for them. Additionally, developers may also come across region-based compliance issues on one platform that can be overcome by serving that region's traffic through a different platform:

Figure 3.5 – The partitioned multi-cloud pattern


Whatever your reason may be, the partitioned multi-cloud pattern enables you to run the same application on two separate vendors.

The analytics hybrid and multi-cloud pattern

Many businesses prefer to host large silos of data on their private backend systems (on-premises) but want to move the processing and analysis workloads to the cloud – essentially feeding data to the cloud:

Figure 3.6 – The analytics hybrid and multi-cloud pattern


This is possible with the analytics hybrid and multi-cloud pattern. Google Cloud in particular is well suited to such an arrangement, as it does not charge for ingress traffic coming into Google Cloud. Additionally, Google Cloud offers numerous managed services that make ETL pipelines easier to create and maintain, and developers can use Cloud Storage to build data lakes on the cloud.

Edge hybrid

Edge hybrid is a pattern for running workloads at the edge, which reduces latency and the dependency on a fast internet connection. For many businesses, a fast and reliable connection isn't guaranteed, and a connectivity outage at the on-premises facility shouldn't affect the end user's experience.

There are numerous other cases where a fast and stable connection isn't always present and here, an edge hybrid setup allows developers to run time-critical workloads in the edge-computing environment while non-critical workloads such as administration and monitoring can run (usually asynchronously) on Google Cloud:

Figure 3.7 – The edge hybrid pattern

Figure 3.7 – The edge hybrid pattern

Additionally, as we mentioned in Chapter 2, End-to-End Extensible Tooling for Cloud-Native Application Development, Google Cloud has a range of services that can be used for such hybrid and multi-cloud setups including Anthos on bare metal, which provides a single-pane-of-glass-view that makes it easier to manage these partitions.

Redundant deployment patterns

The following patterns are for applications that will be deployed in different computing environments, with the goal of improving resilience in different scenarios.

The environment hybrid pattern

There are many cases where a business needs to (or simply prefers to) keep the production environment on-premises while moving other environments, including various testing environments, to the public cloud. Common reasons include the following:

  • Legal and compliance requirements
  • Dependencies on third-party services that cannot run on public clouds
  • A preference for keeping the production environment in its existing data center

Figure 3.8 – The environment hybrid pattern

The environment hybrid pattern allows developers to easily create and tear down such partitions. This can also be used as an opportunity to get familiar with the cloud without risking the production environment.
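The policy at the heart of this pattern is simply a mapping from environment to hosting location. The sketch below is a hypothetical illustration of such a policy, not a Google Cloud feature – the environment names and the `deploy_target` helper are invented for the example:

```python
# Illustrative sketch only: a hypothetical environment-to-location policy
# under the environment hybrid pattern. Production stays on-premises;
# test environments live in the public cloud, where they are cheap to
# create and tear down.

ENVIRONMENT_LOCATION = {
    "production": "on-premises",       # compliance keeps prod in the data center
    "staging": "public-cloud",
    "integration-test": "public-cloud",
    "load-test": "public-cloud",
}

def deploy_target(environment):
    """Return where a given environment should be provisioned."""
    if environment not in ENVIRONMENT_LOCATION:
        raise ValueError(f"unknown environment: {environment}")
    return ENVIRONMENT_LOCATION[environment]

print(deploy_target("production"))  # on-premises
print(deploy_target("load-test"))   # public-cloud
```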

Business continuity – hybrid and multi-cloud

Despite the increased resilience of cloud-native applications, disaster recovery plans are still crucial. One way cloud-native applications can keep working in spite of major disasters is by replicating services and hosting them across geographically separate data centers – eliminating the risk posed by a single point of failure:

Figure 3.9 – The business continuity pattern

The business continuity pattern allows businesses to switch to a cloud-based disaster recovery environment that can take over during a disaster and ensure that users still have access to the service. Additionally, the pay-per-use model ensures that businesses pay for compute only while the recovery VMs are actually running, and only for storage the rest of the time.
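At its core, the failover decision is a simple preference order: serve from the primary site while it is healthy, and switch to the cloud disaster recovery environment otherwise. The sketch below is illustrative only – the health flags and environment names are hypothetical, and a real setup would drive this decision from health checks and DNS or load-balancer failover:

```python
# Illustrative sketch only: the failover decision in the business
# continuity pattern. Health flags stand in for real health checks.

def pick_serving_environment(primary_healthy, dr_healthy):
    """Prefer the primary site; fail over to the cloud DR environment."""
    if primary_healthy:
        return "primary-on-prem"
    if dr_healthy:
        return "cloud-dr"  # pay-per-use recovery VMs spin up only now
    raise RuntimeError("no healthy environment available")

print(pick_serving_environment(True, True))    # primary-on-prem
print(pick_serving_environment(False, True))   # cloud-dr
```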

The cloud bursting pattern

The cloud bursting pattern allows developers to manage bursty traffic with high peaks (such as traffic on Black Friday and Cyber Monday) by using a separate cloud environment that kicks in when the baseline, on-premises production environment is overwhelmed. In effect, the cloud environment forms an outer layer that can absorb temporary, albeit high-volume, traffic:

Figure 3.10 – The cloud bursting pattern

Cloud bursting is a great option as it allows developers to do the following:

  • Use the same resources on the cloud and in on-premises environments.
  • Not worry about overprovisioning resources.
  • Ensure that the application has the capacity to serve customers in a timely manner without breaking the bank.
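The routing decision behind cloud bursting can be expressed in a few lines. The sketch below is purely illustrative – `on_prem_capacity` and the request counts are hypothetical numbers, and in practice a load balancer or autoscaler makes this call automatically:

```python
# Illustrative sketch only: fill the on-premises environment up to its
# capacity first, and burst any overflow to the cloud layer.

def route_requests(request_count, on_prem_capacity):
    """Split incoming requests between on-premises and cloud capacity."""
    on_prem = min(request_count, on_prem_capacity)
    cloud = request_count - on_prem  # overflow absorbed by the cloud layer
    return {"on_prem": on_prem, "cloud": cloud}

print(route_requests(800, 1000))   # a normal day: no burst needed
print(route_requests(5000, 1000))  # Black Friday: 4,000 requests burst
```

On a normal day the cloud layer sits idle and costs nothing for compute, which is why the pattern avoids both overprovisioning and dropped requests at peak.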

With this, we have completed our tour of cloud-native architectural patterns.

Summary

Cloud-native design patterns were made to help you anticipate and prevent common challenges in cloud-native app development. With a strong understanding of these patterns, you can save a significant amount of time and effort by avoiding problems that often trip up new developers. In a way, these patterns can be thought of as shortcuts, but as with all shortcuts, they must be used with caution. Design patterns aren't meant to be perfect; they're meant to be quick and widely applicable. As a result, they often come with certain drawbacks that, depending on your project, may not affect you at all or may create more problems down the line. Therefore, it's very important to understand not just what problem a pattern solves but also at what cost.

Armed with this knowledge, you are now ready to make your first big decision – choosing the compute option. Google Cloud offers a wide range of compute options and in the next chapter, we will discuss these options and how you can choose the best place to run your application.
