Chapter 1. Microservices

Adopting microservice architecture, whether from the ground up or by splitting an existing monolithic application into independently developed and deployed microservices, solves these problems. With microservice architecture, an application can easily be scaled both horizontally and vertically, developer productivity and velocity increase dramatically, and old technologies can easily be swapped out for the newest ones.

Susan J. Fowler, Production-Ready Microservices

In the last chapter, we talked about the pain of distributed systems. Microservices seek to ease that pain by providing a structure and set of best practices to make sure that the development of your application will scale. You may be thinking: Why am I concerned with the scalability of the development of my project? Scalability has always been a pain point for applications and organizations that have the ambition or the need to grow past a single team of developers.

There are many definitions of microservices. I think Sam Newman described it best in Building Microservices: microservices are “small, autonomous services that work together.” They are an evolution of service-oriented architecture (SOA) to fit the way organizations are actually structured. If a service can no longer be developed or maintained by a single team of developers, it is too big in the eyes of microservices. How big should that team be? That is up to your organizational structure.

In some ways, microservices are a bottom-up revolution in software engineering. They are a fight waged by the masses, one that has reached a critical mass of adoption. People joined this fight for the increased autonomy of choosing their own implementation details and to escape the friction of developing tightly coupled systems such as monoliths. While microservices are eating the world, that does not mean you will need them specifically. We will end this chapter with a look into when to build in a monolithic way, when to build a service-oriented architecture such as microservices, and when to build a monolith that can become services later.

Why do you want to use Microservices?

Speed and Safety at Scale and in Harmony.

Authors of The Microservices Way

When building your application with serverless, you will have to choose the architectural patterns and practices that will allow your application to be resilient and, more importantly, to scale. The important part about scaling may not be how you handle the additional load of users hammering your site in production. It may be how your engineering organization scales: as it grows, it becomes increasingly difficult to build and grow the codebase while maintaining the same high velocity in delivering enhancements, updates, and entirely new features and product offerings.

As I have mentioned, microservices are intended to solve the most difficult scaling challenge ever: people. People don’t scale automatically. It is simple for one person to build an application. They know all of the business logic and implementation details of the entire application. They know every trade-off and decision made to get a project across the line. However, one engineer can do only so much, inherently limiting what their organization can accomplish. A small team can increase the output without adding too much additional friction. But once you add another team, communication and coordination become much more complex, slowing down development velocity. A good workaround might be that, instead of having two teams develop one application, you have them develop two components that become an application when combined. Since every application must interact with other software, whether it be an API or the instruction set of a CPU, it feels very natural to develop software in this way. By giving each team a tiny but independent part of an application, you can scale up the people part of the equation, and scaling an engineering organization is the main reason to build things the microservices way. This is the core strength of microservices, so let’s dive right into it.

Improved Developer Velocity

The giant, monolithic “bookstore” application and giant database that we used to power Amazon.com limited our speed and agility. Whenever we wanted to add a new feature or product for our customers, like video streaming, we had to edit and rewrite vast amounts of code on an application that we’d designed specifically for our first product—the bookstore. This was a long, unwieldy process requiring complicated coordination, and it limited our ability to innovate fast and at scale.

Werner Vogels, CTO Amazon.com

This quote comes from “Modern applications at AWS,”1 a post on the personal blog of the CTO of Amazon.com. In it, he details how, in 1998, Amazon.com decided to reinvent the process with which it innovates. “Invent, launch, reinvent, relaunch, start over, rinse, repeat, again and again” is the iterative process Jeff Bezos, the CEO of Amazon.com, cited in his 2018 annual letter to shareholders.2 This process was not happening fast enough on the technical side. “Most companies, like Amazon, start their business with a monolithic application because it’s the fastest, easiest system to develop,” Vogels writes. But at some point, they hit a wall: these monoliths can no longer accommodate even a simple addition, becoming more fragile as they grow. Amazon had to change all sorts of deep, dark corners of its codebase just to add something new and useful for customers.

This is the main reason teams and organizations turn to microservices: systems come to represent the organizations that build them. This is the basis of Conway’s Law,3 and it is just as true now as it was when Melvin Conway stated his theory in 1967.

So what has changed since 1967? Systems and their organizational structures have formed a symbiotic relationship, adapting to each other to build something stronger, rather than the software merely representing the company’s communication structure. Once an organization reaches a certain size, it can be argued that it should adopt not only microservices but also the principles behind microservices. Teams that are independent, loosely coupled, cohesive, and empowered with authority over a specific set of responsibilities are vastly more productive and happier, even before they design their services to mirror that organizational structure. The increase in productivity is a direct result of the ability to operate autonomously and make decisions without being blocked by other teams, the reduced scope and complexity of the component being developed, and the specialized and deep understanding each team has of its business responsibilities.

Increased Developer Freedom

When each component of an application becomes a mini application in itself, it can make bolder architectural choices for the problems that it must solve and the version of the truth it is responsible for. When using microservices, each component does not have to be written in the same programming language. Some languages are better for certain kinds of workloads. Sometimes a killer library makes all the heavy lifting trivial. Developers are the most effective and at their happiest when they have the autonomy to do their jobs. If you are a CTO reading this book, make sure that any new language usage is supported by a critical mass of engineers who can be effective in that language in the case of an incident.

In the same way that you can pick the most optimal language for each microservice, you can choose the most optimal database. Each microservice that has state will have to have some form of data store. This could be a virtualized drive (some form of block storage), blob storage in a bucket, or a database. Vogels espouses the use of purpose-driven databases. Need transactions? PostgreSQL might be the first to come to mind. Is the schema not as important as just storing data against a given identity? NoSQL might give you the flexibility you are looking for.

Issues with microservices

Microservices create a new problem for every problem solved. This may seem like a bad thing, but it is just the natural phenomenon of incurring trade-offs for each decision made when building your application. So why do people incur the costs we are about to cover? At some point, developing, supporting, and running a monolith will no longer allow for any agility in an organization. Each new business decision may involve undoing the implementation of previous decisions. By contrast, in a microservices architecture, when a component no longer represents the business interests of the organization, it can be swiftly changed in place without affecting other parts.

Some of these issues come for free once your organization has scaled past having multiple teams of developers working on the same user-facing application. It is impossible for one person to fully understand every detail of how GitHub works down to the implementation level, even though you can find open source alternatives written by a single person. There is a difference in scale that requires more complexity in the organization the application belongs to, and that structure, at least according to Conway’s Law, will dictate how the system itself looks.

Even without tying yourself or your team to the specifics of microservices, based on my rules or anyone else’s, if your organization is going to get big, it is going to need some form of services. Here we’ll look at some of the challenges that will be faced in that quest. This area is still rapidly evolving, so be on the lookout for tools that can help solve these pain points, but your results may be best if you embrace the chaos of reality and build for it, knowing that it is the environment you are targeting.

Increased Complexity

The more unique and independent each service is, the more complicated it is to maintain. Keep in mind that every active microservice needs to be actively owned by a team. The external world is not a constant, and there will need to be an owner to handle maintenance or defects that arise during the lifecycle of each service. In addition to increased independence, those teams may speak different programming languages.

Putting aside the issue of the services being written in a variety of programming languages, and developers having varying experience with those languages: a single request by a user may be transformed by, handed off between, or forked into multiple processes in a number of different services until its natural completion. This natural completion may only occur when the user deletes their account. It can be difficult to track this task in a cohesive way. There are tools that help with this, but it can still be quite a cognitive challenge, since no one developer will fully know or understand the task’s path through the other microservices.

Proper DevOps Practices and Resources Needed

The complexity of having many different little applications increases if each deployment pipeline is unique. Imagine if one service was deployed on a cluster of Raspberry Pi servers in the microkitchen instead of in the cloud. This is an example of when implementation details leak and cause issues in production for others. While one team may have thought this was a clever way to solve a problem, an engineer paged in the middle of the night may not even know that this cabinet cluster exists. While the implementation of each step may be unique, companies must ensure that all the steps, tools, and practices for production are the same, even though the microservices are owned by different teams.

At a certain size, your organization will need its own platform engineering team. In Production-Ready Microservices, Fowler mentions the need for a “microservices ecosystem”: “A successful, scalable microservice ecosystem requires that a stable and sophisticated infrastructure be in place.” At first, if you have only one team doing all of the backend work, it will need to deal with the overhead of managing the infrastructure and engineering the platform that your microservices will target. Remember, the developer experience is part of the shift to microservices; without it, you will be missing the main benefit of this choice. You may want to defer this, as we will later discuss, until your organization has reached the scale at which it can have a full-time team dedicated to how engineers ship their code to production. Standardizing the pipelines and interfaces is just a part of this. But it is a significant overhead that will only benefit your organization if it has the scale to need it, or at least to make it worth the growing pains of getting there.

Challenges with Local Development and Testing

It can be much more difficult to develop a microservice when the other microservices it will interact with and rely on are separate entities with their own dependencies. Each service being developed cannot be exhaustively tested in the vacuum of running a single service locally. Depending on the scale of your application, you may be able to run the entire constellation of microservices locally, but that will not always be the case, especially when you decide to build on managed services from your cloud provider. Some cloud providers have ways to run versions of these managed services locally; DynamoDB, for example, has a development version that can be run locally. Others have community-reproduced services that can be run locally for the purpose of development, and some managed services are just hosted versions of open source software that you can run locally. Otherwise, you are bound to create separate resources in the cloud to develop against; the cost of these pay-per-use services scales down drastically toward zero, but you will be reliant on internet connectivity.
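As a minimal sketch of what developing against a local stand-in can look like, the following assumes DynamoDB Local is already running on port 8000 (for instance, started with docker run -p 8000:8000 amazon/dynamodb-local); the table and item are purely illustrative:

import boto3

# Point boto3 at DynamoDB Local instead of the real AWS endpoint.
dynamodb = boto3.resource(
    "dynamodb",
    endpoint_url="http://localhost:8000",  # local endpoint, not AWS
    region_name="us-east-1",               # required by boto3 but ignored locally
    aws_access_key_id="local",             # dummy credentials for local use
    aws_secret_access_key="local",
)

# Create the table for local development; in the cloud this would be
# provisioned by your deployment tooling instead.
table = dynamodb.create_table(
    TableName="accounts",
    KeySchema=[{"AttributeName": "account_id", "KeyType": "HASH"}],
    AttributeDefinitions=[{"AttributeName": "account_id", "AttributeType": "S"}],
    BillingMode="PAY_PER_REQUEST",
)
table.wait_until_exists()

table.put_item(Item={"account_id": "42", "email": "dev@example.com"})
print(table.get_item(Key={"account_id": "42"})["Item"])

The rest of your service code stays the same; only the endpoint changes between local development and the cloud.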

Much like local development, testing suffers from this same interconnectedness of dependencies. But another issue exists as well: if your organization has one quality assurance (QA) environment, what happens when the test version of one of the microservices is broken? Other services may not be able to test their latest code against it.

You have to rely more on the independence of each application and trust that each of these teams has tests to prevent regressions from occurring. But you must also use integration tests to make sure your applications will work in harmony in the real world. Also, you need to have safety mechanisms, such as canaries or other gradual rollouts, so that if the new version of your service breaks some other services, it will be automatically detected, rejected, and rolled back. This functionality may be built into certain container orchestrators, such as Kubernetes, but it is not currently built into any of the cloud providers’ serverless platforms. They do, however, expose enough control over which version is running to follow these best practices. You might want to consider monitoring the health of newly deployed function code without the need for human observation. Remember, in the land of the cloud, automation is king!
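As a sketch of the kind of control these platforms expose, the following assumes AWS Lambda and boto3 and uses a weighted alias to send a slice of traffic to a new version; the function name, alias, and version numbers are hypothetical, and the monitoring step is left to your own tooling:

import boto3

lambda_client = boto3.client("lambda")

# Hypothetical function and alias; callers always invoke the "live" alias.
FUNCTION = "checkout-service"
ALIAS = "live"
STABLE_VERSION = "41"
CANARY_VERSION = "42"

# Send 10% of invocations to the new version while the alias still points
# at the stable version for the remaining 90%.
lambda_client.update_alias(
    FunctionName=FUNCTION,
    Name=ALIAS,
    FunctionVersion=STABLE_VERSION,
    RoutingConfig={"AdditionalVersionWeights": {CANARY_VERSION: 0.10}},
)

# ...watch error and latency metrics for the canary version here...

def promote():
    # The canary looks healthy: shift all traffic to the new version.
    lambda_client.update_alias(
        FunctionName=FUNCTION,
        Name=ALIAS,
        FunctionVersion=CANARY_VERSION,
        RoutingConfig={"AdditionalVersionWeights": {}},
    )

def roll_back():
    # The canary is failing: drop the extra weight and stay on the stable version.
    lambda_client.update_alias(
        FunctionName=FUNCTION,
        Name=ALIAS,
        FunctionVersion=STABLE_VERSION,
        RoutingConfig={"AdditionalVersionWeights": {}},
    )

Whether promote or roll_back runs should be decided by your metrics, not by a human watching a dashboard.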

It is better to catch these issues before they ever hit production. Writing end-to-end tests becomes critical to the production stability of your app. This is a topic we will discuss later in the book, but some teams are turning to options like Mountebank4 that allow each team to ship a fake version of their service for use in test suites.
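To give a flavor of how that works, here is a minimal sketch that assumes a Mountebank process is already running on its default admin port (2525) and registers a fake “accounts” service over Mountebank’s REST API; the port, path, and payload are illustrative:

import json
import requests

# A stand-in "accounts" service: GET /accounts/42 returns a canned response.
imposter = {
    "port": 4545,
    "protocol": "http",
    "stubs": [
        {
            "predicates": [{"equals": {"method": "GET", "path": "/accounts/42"}}],
            "responses": [
                {
                    "is": {
                        "statusCode": 200,
                        "headers": {"Content-Type": "application/json"},
                        "body": json.dumps({"account_id": "42", "status": "active"}),
                    }
                }
            ],
        }
    ],
}

# Register the imposter with the Mountebank admin API.
requests.post("http://localhost:2525/imposters", json=imposter).raise_for_status()

# The test suite can now hit the fake accounts service as if it were the real one.
resp = requests.get("http://localhost:4545/accounts/42")
assert resp.json()["status"] == "active"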

How do you use microservices effectively?

Microservices help us embody one form of operating rules when developing a distributed system. They can help provide more clarity about the “what” and “how” of solving the pain points of the previous chapter. Sure, they won’t help you deal with inaccuracy in clocks, but they will provide more detail about how loosely coupled services should communicate. There are organizations running thousands of microservices, and they are only able to do so with strict adherence to best practices.

With microservices, consistency is key. You are taming the beast with a significantly more complicated architecture, as an investment into your entire organization’s productivity. But without this consistency and consensus, you’ll have a bad time. With strong patterns and practices, you will have plenty of extra time to read all those microservices horror stories on the internet, because of all of the time and productivity you gain.

Consistent Interfaces

Microservices in your organization should all use the same type of interface with consistent rules about how they make information available and how they are consumed.

Keeping in mind that the main feature of microservices is that independent teams own their own destinies, this independence can only be maintained by having common rules and practices. Otherwise the services will not be able to reliably interact with and depend on each other, and instead of an application, you will be left with a house of cards. Be careful, however, not to prescribe specific technologies that would counteract this independence. Think about how hard it would be to travel between countries if each and every country had its own idea of what a passport should be like. Imagine walking up to an immigration counter after a 16-hour flight to be told that your passport can’t be accepted because the picture has to be on the right side in certain countries and yours is on the left. Or worse, imagine having two passports for this very reason and leaving one at home! (See chapter 5 for an in-depth discussion about interfacing with other services.)

Loosely Coupled

As discussed in chapter 2, you should be able to make a change to one system without having to make a change to another. This is what enables the high velocity of change, one of the main reasons to use microservices.

Keep the glue between your components technologically agnostic. Do not allow the introduction of consistent patterns to dictate or limit the technological choices a team can make. Your components will share interfaces, but other than that, they should keep all details to themselves, especially implementation details. That way, other components can never become reliant on those details. While the teams developing components should be encouraged to share this information with other teams and their organizations, the components themselves should be blissfully ignorant. Your system does not need to know how Stripe or Twilio work, but you as a developer may need to understand this in order to better interface with them, or to choose to use them in the first place.

In this spirit, never allow two services to share the same database. Sharing a database allows consumers of your service to circumvent your logic and tie themselves directly to the implementation details. A change in the database can break these other consumers, which means you no longer have the freedom to change databases when the evolution of your service calls for it. They should never be able to access that data directly in the first place. If for some reason you have to share a common datastore between services, make sure the services have different database users that can only see the tables they should be able to see (also a security best practice).
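If you do end up sharing a datastore, the database itself can enforce that separation. Here is a minimal sketch, assuming PostgreSQL and the psycopg2 driver, of giving each service its own login that can only touch its own tables; the service, table, and password names are purely illustrative:

import psycopg2

# Run once as a database administrator against the shared PostgreSQL instance.
admin = psycopg2.connect("dbname=app user=postgres")
admin.autocommit = True

with admin.cursor() as cur:
    # One login per service, so neither can read the other's tables.
    cur.execute("CREATE USER cart_service WITH PASSWORD 'cart-secret'")
    cur.execute("CREATE USER accounts_service WITH PASSWORD 'accounts-secret'")

    # Each service is granted access only to the tables it owns.
    cur.execute(
        "GRANT SELECT, INSERT, UPDATE, DELETE ON carts, cart_items TO cart_service"
    )
    cur.execute(
        "GRANT SELECT, INSERT, UPDATE, DELETE ON accounts TO accounts_service"
    )

admin.close()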

Microservices must be independently deployable. That is the only logical conclusion when the point of microservices is to be independently designed, developed, and tested. If you cannot deploy changes to one component without having to bundle it with other components, then they are not loosely coupled. One main focus of the modern adaptation of microservices is being able to move fast without breaking things. This is a critical component of achieving that goal. When utilizing the serverless framework, you can deploy changes to individual functions.

In the serverless world, some of the implementation details for interfaces may already be chosen for you. You may have to wrap or adapt the interfaces to match the standards of your organization, or build your own on top of the default offerings. As an example, imagine either a homegrown or third-party tool that helps you trace your workloads as they pass through different components of your application. To facilitate this, a request or trace identifier (trace_id) must accompany all invocations or tasks. This can be part of your standard, defined interface, and the lambdas being invoked can refuse to do the work if the request does not meet the standards of your organization. This may result in failed workloads, but only enforcing these standards will empower you with the increased velocity of development: in this case, the ability to trace a workload through the code and processes of many different teams and workflows. With a common set of rules for how services communicate, you can maintain your independence and autonomy regarding the implementation details.
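A minimal sketch of such a guardrail, assuming AWS Lambda with Python handlers and a hypothetical organizational rule that every event carries a trace_id, might look like this:

import json

class MissingTraceId(Exception):
    """Raised when an invocation does not carry the organization's trace_id."""

def handler(event, context):
    # Hypothetical standard: every invocation, whether it comes from another
    # service, a queue, or a scheduler, must carry a trace_id so the workload
    # can be followed across teams and services.
    trace_id = (event or {}).get("trace_id")
    if not trace_id:
        # Refuse to do the work rather than produce an untraceable result.
        raise MissingTraceId("rejecting invocation without a trace_id")

    # ...do the actual work of this service here...
    result = {"status": "ok"}

    # Propagate the same trace_id to anything invoked or returned downstream.
    return {"trace_id": trace_id, "body": json.dumps(result)}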

How Micro is a Microservice?

Here is a misconception: A microservice has to have a clear and defined size and scope to qualify as a microservice. The truth is, there is no “one size fits all” in designing your services, nor is there a common metric to measure them. The size of your services should instead be a factor of the size of your teams, project, and organization; the opinions of the people involved; and, most importantly, entropy.

Let’s think about building an accounts service. Sounds easy, boring, and perhaps even a bit like a solved problem. But when a team decides to start carving up a monolith, identity, authentication, and authorization go from being built-in features of the application framework to problems you have to solve. Let’s pretend we have perfectly designed and architected this accounts service. Where should a user’s physical mailing address live? In the shipping service or the location service? This will vary depending on your application. If you send packages from a warehouse to a delivery carrier, the location of your customers is not of much concern. But if you decide to start making deliveries, understanding the physical location of where an item needs to go becomes a lot more relevant. A delivery service would likely rely on both the shipping and location services to get deliveries into the hands of your users. Thoughtful design must go into how you delineate your services and separate concerns, regardless of their sizes. Let’s dig a little deeper before we pull up.

Assume now that your organization has jumped into a new line of business and needs to shift to microservices. The new line of business is so vastly different that a user of one product is not automatically a user of all products. Maybe they have different terms of service that must be accepted, or require different information to register. Maybe you have two sides of a marketplace to service. For example, on a ride-sharing platform, not all passengers are drivers, and vice versa. Should your accounts service be the one to mark the distinction? Should your accounts service know the driver’s license details of accounts that are registered as drivers?

The quick answer is that if you are fully committed to the paradigm of microservices, your accounts service should just handle authentication. A separate profiles service can handle user information, and another can handle authorization, such as whether a user can see the driver dashboard. If this sounds too hectic for a small backend engineering team, it very well might be, so let’s talk about monoliths.

Choosing between monoliths and microservices

A monolithic application is one where all of the logic and components of the application live in one deployable package. If your entire application lives in one project in a web framework such as Django or Rails, it is likely a monolith. These technologies are not incompatible with microservices. But, generally speaking, there are other frameworks inspired by these projects and meant for developing smaller components, such as Flask, Sinatra, or Express, that would be more appropriate for a microservice.

When developing a monolith, things move quickly at first. Feature after feature gets added on, and even small changes seem to be quickly applied. But as the components become highly interdependent, development slows down. Making what used to be a simple change becomes increasingly complicated because different components of the application have become intertwined and tightly coupled. You can’t make a change in the target component without making seemingly unrelated changes in the components tightly coupled to it. This coupling is what normally happens when an individual person or a small team works on a system that is simple enough to keep a fully accurate mental model in the brain of one developer, and when frameworks encourage sharing of unrelated business logic across common entities. This is not to say that all monoliths are complete messes. It helps if you can separate out long-running tasks and background jobs, either by using something like Celery to help you easily defer the execution of these tasks, or by directly placing tasks onto a queue. It may also make sense to build certain components that seem highly independent separately from the start, to avoid the future need to split them out.

When Should You Use a Monolith?

If you anticipate that your development team will stay under 15 people for the next 5 years, and you expect to have fewer than 10 million active users, you may want to keep things easy by sticking with the monolith. Later in this section, I will discuss how to design your monolith for future separation into microservices, which gives you some of the advantages of microservices without any of the downsides.

Can I Use Serverless With a Monolith?

Yes, you can. And that might be a wonderful or a terrible decision, depending on what you are doing. There are two kinds of serverless adoption for application logic. Some teams ship their monolith to a function as a service (FaaS) offering to avoid managing servers. This is still a monolith. Others deploy collections of functions. The latter is more the focus of this book; however, we will not leave the serverless monoliths in the dark.

No matter what you do, you will want to follow the principles we covered in the previous chapter to reduce the friction of running a successful application. If you do not expect your engineering organization to scale past the infamous “two pizza”5 teams at Amazon, a monolith might be the right answer for you. You can still expose the monolith as different functions in your serverless deployment so that you have fine-grained control and introspection over each clearly separated part serving up your users’ requests. But let’s look at another way to start off on the simple and easy route while preparing for hyper-growth.
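As a minimal sketch of what that can look like, the handlers below are thin entry points into one shared monolithic codebase, each of which can be deployed, scaled, and monitored as its own function; the helper functions stand in for the monolith’s internal logic, and the event shape assumes an API Gateway-style proxy event:

# handlers.py -- one monolithic codebase exposed as several functions.

def carts_view(account_id: str) -> str:
    # Stand-in for the monolith's existing cart logic.
    return f'{{"account_id": "{account_id}", "items": []}}'

def begin_checkout(account_id: str) -> str:
    # Stand-in for the monolith's existing checkout logic.
    return f'{{"account_id": "{account_id}", "checkout": "started"}}'

def get_cart(event, context):
    # Deployed as its own function, so it can be scaled and observed alone.
    account_id = event["pathParameters"]["account_id"]
    return {"statusCode": 200, "body": carts_view(account_id)}

def start_checkout(event, context):
    # A separate function for the heavier, less frequent checkout path.
    account_id = event["pathParameters"]["account_id"]
    return {"statusCode": 202, "body": begin_checkout(account_id)}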

Perforating Your Monolith for Easy Separation in the Future

I was interviewing for a small startup project when the CTO brought up carving out a service to improve the reliability of a critical function of the system, one that was not directly user facing and ran at a scale an order of magnitude larger than the rest of the monolith. He wanted to know how I would carve this up and design a service to handle this scaling issue. My short answer? I would not. Their application code was error-prone, and this function consumed the most compute with the least visibility, causing all sorts of issues for the core business. I suggested that instead of having bad code talk to bad code directly, adding a TCP connection between the two pieces of bad code would just make the issue worse. The code itself had accumulated too much technical debt and needed to be addressed directly. So how does this relate to microservices?

My recommendation took all of the best of microservices while avoiding the downsides. The functionality would be rewritten as if it were a microservice. There would be a clear separation of concerns and a well-defined, specified contract between the two components; as a cherry on top, it would be perforated for future separation, for when it would inevitably have to be split out of the monolith. In this particular case, the functionality they expected to turn into a microservice would instead be turned into a library. This library would have its own robust test suite, its own versioning, and, most importantly, a clearly defined interface meant to be used as if it were any other network-accessible API. In this pattern, the library was designed never to raise an exception but instead always to return a response, even in an error case. They hired me, and I completed the refactor in a matter of weeks. The core of this library wound up clocking in at around 200 lines of code, and despite being one of the most exercised code paths, it has still not been modified years later. This is a microservices-style win without any of the downsides of moving to microservices prematurely.
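The never-raise, always-respond pattern is easy to sketch. The following is a hypothetical reconstruction in Python, not the actual library: the caller always receives a Response object, exactly as it would from a remote service’s API, and never has to wrap the call in try/except; charge and _do_charge are invented names:

from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Response:
    """What the library hands back in every case, success or failure."""
    ok: bool
    value: Optional[Any] = None
    error: Optional[str] = None

def _do_charge(account_id, amount_cents):
    # Stand-in for the implementation detail the library hides.
    return {"account_id": account_id, "charged": amount_cents}

def charge(account_id: str, amount_cents: int) -> Response:
    # Report problems as data, not as exceptions.
    if amount_cents <= 0:
        return Response(ok=False, error="amount must be positive")
    try:
        receipt = _do_charge(account_id, amount_cents)
    except Exception as exc:
        # Even unexpected failures come back as a Response the caller can inspect.
        return Response(ok=False, error=str(exc))
    return Response(ok=True, value=receipt)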

What are the lessons of this story?

  1. Monoliths can be a collection of services waiting to be broken up.

  2. Microservices best practices are based on engineering best practices. Learn, embody, encourage, and adopt best practices at all costs. You can break the rules, but you should use best practices when deciding to do so.

  3. When making important architectural decisions, you can’t only rely on the advice or opinions of others, myself included. Seek the information and experience of others, but make your own best decisions when it comes to the implementation details.

You can build your monolith with the patterns of microservices but without their plumbing and overhead. This works well if you are trying to build a greenfield concept and get it to market as quickly as possible, but you want to avoid the later pitfalls of a monolith. This is a certified best practice that you should share with all of your friends. Here is how it works: take all of the principles espoused in this book (clean separation of concerns, loosely coupled and highly independent services, and consistent interfaces), keep them in the same monolithic app, and never compromise on these rules. The result will be a monolith that is baked to perfection and ready to be carved up later.

You can even take this further by having your monolith operate in different modes. A common but often overlooked example is wrapping a long-running task so that it can be called directly, but also deferred for later execution by a task server. If you are using Python, this is usually done with Celery, but the practice works the same regardless. These long-running tasks live in the same monolithic application code as your directly user-facing tasks, but they will never be run by those servers. Instead, they are run by containers, servers designated as task servers, or, now, functions. One monolith, two different modes of operation. True, it won’t be free or automatic to break this up for the purpose of scaling or to help a growing engineering organization, but it will be straightforward and predictable if you follow the principles of effective microservices architecture from the start.
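A minimal sketch of that pattern with Celery, assuming a Redis broker on localhost and an illustrative send_receipt task, looks like this:

from celery import Celery

# Hypothetical broker URL; in production this points at your real queue.
app = Celery("tasks", broker="redis://localhost:6379/0")

@app.task
def send_receipt(order_id: str) -> None:
    # Long-running work, such as rendering and emailing a receipt.
    print(f"sending receipt for {order_id}")

# Called directly, the task runs inline in the current process (handy in tests
# or in the web tier when the work is small):
send_receipt("order-123")

# Deferred, the same function is placed on the queue and executed later by a
# worker process, a container, or a function:
send_receipt.delay("order-123")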

The beautiful part of this is that you are designing with the best practices needed for highly distributed systems and microservices, but instead of dealing with all of the pain of distributed systems, you get the simple operation of a monolith, until your organization grows and a monolith no longer supports its needs.

When do you want to use Microservices?

By now, you may be able to answer this question. If you are starting a greenfield project, the hybrid or pre-perforated approach might be best. If you are building an eCommerce site, you may want to build with a monolith. But if you imagine that one day you will have an entire team, or teams, of engineers dedicated to a single component such as a shopping cart, then you want microservices and may want to incur the costs of developing them while external demand is low. Furthermore, if you expect that team to have something to do every sprint in terms of improving or maintaining that component, then it only makes sense to avoid paying the switching costs later.

Keep in mind that teams-to-microservices does not have to be a strict one-to-one mapping. The shopping cart on Amazon has likely scaled to the complexity that it may need more than one team, or more than one service. The inverse may be true as well: Your organization may have a team focused on the “check-out” experience that owns multiple services, including the shopping cart. It is important to balance workload and team size. Again, the goal here is to model your system on how your organization works.

Conclusion

Regardless of the size your organization will grow to, even if that will only ever be you, make sure to follow the principles of well-crafted services: loosely coupled and preferably independent components, and consistency in rules, practices, and interfaces. Empower your developers to act in loosely coupled, independent, autonomous, yet cohesive teams to maximize the resilience of not just your application but your organization. Don’t forget the inverse of Conway’s Law: design your teams as you would your application. You can’t scale if your servers keep failing, and especially if your engineers keep leaving, and usually one leads to the other.

1 https://www.allthingsdistributed.com/2019/08/modern-applications-at-aws.html

2 https://blog.aboutamazon.com/company-news/2018-letter-to-shareholders

3 “Organizations which design systems … are constrained to produce designs which are copies of the communication structures of these organizations.” Melvin Conway, 1967

4 http://www.mbtest.org

5 Jeff Bezos is famous for declaring the correct team size as that which can be fed by two pizzas. Any larger than that, and they are too busy deciding instead of doing.
