Identifying candidates within the monolith
This chapter describes how to identify candidates within monolith applications for a microservices architecture and its practices. It also describes how to design and refactor the new components.
This chapter includes the following sections:
3.1, "Identifying candidates"
3.2, "Considerations for moving to microservices"
3.3, "Decomposing a monolith application into microservices"
3.4, "Refactoring"
3.5, "Identifying and creating a new architecture example"
3.1 Identifying candidates
Candidates for the evolution to microservices architecture are monolith applications with components that cause any of these situations:
You are unable to deploy your application fast enough to meet requirements because the application is difficult to maintain and modify and developers cannot become productive quickly, which results in long time-to-market cycles for rolling out new services. Perhaps the business requires a new function to take advantage of a new market opportunity, or you want to link into a new social media service. In either case, you are unable to build, test, and deploy your application in a timely manner.
You are unable to deploy a single component independently. Even for a small change or enhancement, other modules or components usually must be built, tested, and deployed with it, because there is no separation of modules and components inside the same .ear or .war package.
Only a single choice of technology exists and you cannot take advantage of new technologies, libraries, or techniques that are being adopted by other enterprises. This can manifest itself in several ways. Perhaps your current technology stack does not support the functionality you need. For example, you want to generate documentation automatically from your code, or you want to link to a new service, but the current technology used in your system or platform does not offer these features.
Your large systems have the following characteristics:
 – High amount of data in memory
 – CPU-intensive operations
 – Unable to scale a portion of the application; usually the entire application must be scaled
 – Cannot be easily updated and maintained
 – Code dependencies are difficult to manage
Onboarding new developers is difficult because of the complexity and large size of the code.
3.2 Considerations for moving to microservices
Consider the following important aspects before you move to microservices.
Flexibility for the development platform
Microservices offer the flexibility to choose the correct tool for the job; the idea is that each microservice can be backed by a different technology (language and data store). Different parts of a single system might be better served by differing data storage technologies, depending on the needs of that particular service. Similarly, different microservices can use different languages, which enables you to choose the preferred technology for a particular problem rather than following a single standard.
Design by functionality and responsibilities
Microservices offer separation of components and responsibilities: each microservice is responsible for a specific concern. In this way, you can apply new features or improvements to specific parts of the system without risking damage to parts that are already working. This approach can also be used to isolate older functionality; new functionality can be added to the monolith system by adding a microservice.
Easily rewrite complete services
Usually microservices are small services that are easy to rewrite, maintain, and change, providing a strong encapsulation model and a safe way to refactor and rewrite.
Flexibility for changes
With microservices, you can defer decisions and have flexibility in the growth of your architecture as your understanding of the system and domain increases. You can defer difficult decisions, such as decisions about data storage technology, until they are absolutely required. For example, a service might at first provide only a thin, domain-aware wrapper over a data store; the most important thing to get correct is the interface with which your other services interact.
Driving business value
Business owners see the benefit of microservices because they want their teams to be able to respond rapidly to new customer and market needs. With the monolithic application development approach, IT response is slow. Microservices are more aligned to business because they allow for frequent and faster delivery times than monolithic services. Microservices allow business owners to get feedback quickly and adjust their investments accordingly.
Other benefits are as follows:
 – Smaller focused teams enable business owners to easily manage resources more effectively, for example, moving resources from low-impact business areas to higher-impact areas.
 – Scaling an individual microservice to remove bottlenecks enables a smoother user experience.
 – Identifying and eliminating duplicate services reduces development costs.
Flexibility regarding scalability
With microservices, different parts of the system can be scaled independently; each microservice is responsible for specific functionality, which results in more flexible scalability.
Security zoning
Security architects insist on a layered approach to building a system to avoid the risk of having important code running on web-facing servers. Microservices can provide zoning that is analogous to the traditional layered approach. Business logic and critical data storage can be separated from the services that provide HTML rendering. Communication between the individual microservices can be firewalled, encrypted, and secured in other ways.
Team skills
With microservices, a team can be grouped by skills or location without the associated risks or difficulties involved with having separate teams working on the same code base.
3.3 Decomposing a monolith application into microservices
This section describes techniques for decomposing a monolith application into microservices.
3.3.1 Designing microservices
One of the great advantages of microservices architecture is the freedom to make decisions about technology stack and microservice size on a per-service basis. This freedom exists because microservices architecture starts by having a clear understanding of the user’s experience.
Use design thinking to scope and identify microservices
Design thinking is a process for envisioning the whole user experience. Rather than focusing on a feature, microservices instead focus on the user experience (that is, what the users are thinking, doing, and feeling as they have the experience). Design thinking is helpful for scoping work to usable and releasable units of function. Designs help to functionally decompose and identify microservices more easily. Design thinking includes the following concepts:
Hills
Hills Playback
Scenario
User story
Epics
Sponsor users
Identifying microservices opportunities
Hills
Hills are statements that provide a business goal for your release timeline. A Hill defines who, what, and how you want to accomplish the goal. A team typically identifies three Hills per project and a technical foundation. Hills provide the commander’s intent to allow teams to use their own discretion on how to interpret and implement a solution that provides for a smooth user experience. The Hills definition must be targeted at a user, and it must be measurable. Avoid using vague or non-quantifiable adjectives when using Hills. Example 3-1 shows a sample Hill.
Example 3-1 Sample Hill
Allow a developer to learn about iOS Bluemix Solution and deploy an application within 10 minutes.
Hills Playback
Hills Playback provides a summary of what the team intends to target. Hills Playback sets the scope for a specific release time period. Playback Zero is when the team has completed sizings and commits to the business sponsors regarding the outcomes that it wants to achieve. Playbacks are performed weekly with participation from the cross-functional teams and the sponsor users who try to perform the Hills. Figure 3-1 on page 39 shows a Hills Playback timeline.
Figure 3-1 Hills Playback timeline
Scenario
A scenario is a single workflow through an experience, and it sets the story and context used in Hills Playbacks. Large scenarios can be further decomposed into scenes and product user stories. Scenarios capture the “as is” scenario and the “to be improved” scenario.
User story
A user story is a self-contained, codeable requirement that can be developed in one or two days. The user story is expressed in terms of the user experience, for example, “As a developer, I want to find samples and quickly deploy the samples on Bluemix so I can try them.”
Epics
Epics group stories into a form that can be reused multiple times across the scenario so that stories are not repeated.
Sponsor users
Sponsor users are users who are engaged throughout the project to represent target personas for the project. They are expected to lead or participate in Playbacks. Sponsor users can be clients who use the applications that include microservices. They can also be internal developers who use microservice onboarding tools that you are developing in support of your DevOps release process.
Identifying microservices opportunities
For each design, identify the opportunities for potential reuse of a service across other designs. The Hill definition described in “Hills” on page 38 is targeted at an iOS solution with Bluemix.
Based on the Hill’s example, you have the following requirements:
Ability to deploy to Bluemix from this website and other websites
Ability to deploy to Bluemix as a separate microservice
A Deploy to Bluemix microservice team
The Deploy to Bluemix microservice team is responsible for implementing the service. The team is also responsible for performing functional and system integration testing of the stories in which that microservice is used. The service includes logging data to collect usage information. This helps to better quantify the microservices effect on the business. It provides visibility into the most popular applications being deployed and into user acquisition after users deploy a sample.
3.3.2 Choosing the implementation stack
Because microservices systems consist of individual services running as separate processes, the expectation is that any competent technology that is capable of supporting communication or messaging protocols works. The communications protocols might be, for example, HTTP and REST; the messaging protocols might be MQ Telemetry Transport (MQTT) or Advanced Message Queuing Protocol (AMQP). You must consider several aspects, though, when choosing the implementation stack:
Synchronous versus asynchronous
Classic stacks, such as Java Platform Enterprise Edition (Java EE), work by synchronous blocking on network requests. As a result, they must run in separate threads to be able to handle multiple concurrent requests.
Asynchronous stacks, such as Node.js, handle requests by using an event loop that is often single-threaded, yet they can process many more requests when handling them requires downstream input and output (I/O) operations.
I/O versus processor (CPU) bound
Solutions such as Node.js work well for microservices that predominantly deal with I/O operations. Node.js can wait for I/O requests to complete without holding up entire threads. However, because the execution of the request is performed in the event loop, complex computations adversely affect the ability of the dispatcher to handle other requests. If a microservice performs long-running operations, doing one of the following actions is preferred (see the sketch at the end of this section):
 – Offload long-running operations to a set of workers that are written in a stack that is best suited for CPU-intensive work (for example, Java, Go, or C).
 – Implement the entire service in a stack capable of multithreading.
Memory and CPU requirements
Microservices are always expressed in plural because you run several of them, not only one. Each microservice is further scaled by running multiple instances of it. There are many processes to handle, and memory and CPU requirements are an important consideration when assessing the cost of operation of the entire system. Traditional Java EE stacks are less suitable for microservices from this point of view because they are optimized for running a single application container, not a multitude of containers. However, Java EE stacks, such as IBM WebSphere Liberty, mitigate this problem. Again, stacks such as Node.js and Go are a go-to technology because they are more lightweight and require less memory and CPU power per instance.
In most systems, developers use Node.js microservices for serving web pages because of the affinity of Node.js with the client-side JavaScript running in the browser. Developers use a CPU-friendly platform (Java or Go) to run back-end services and reuse existing system libraries and toolkits. However, there is always the possibility of trying a new stack in a new microservice without dragging the remainder of the system through costly rework.
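The following minimal sketch shows one way to apply these considerations in a Java EE stack: a JAX-RS 2.0 resource suspends the HTTP request and offloads a CPU-intensive computation to a dedicated worker pool so that request-handling threads are not blocked. The ReportResource class, the /reports path, and the buildYearlyReport method are illustrative names introduced here, not part of an existing system.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;
import javax.ws.rs.container.AsyncResponse;
import javax.ws.rs.container.Suspended;
import javax.ws.rs.core.MediaType;

@Path("/reports")
public class ReportResource {

    // Dedicated pool for CPU-intensive work, sized to the available cores,
    // so long-running computations do not tie up request-handling threads.
    private static final ExecutorService workers =
            Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public void generateReport(@QueryParam("year") int year,
                               @Suspended AsyncResponse response) {
        // Suspend the request and hand the heavy computation to the worker pool.
        workers.submit(() -> {
            try {
                String report = buildYearlyReport(year); // CPU-bound work
                response.resume(report);
            } catch (Exception e) {
                response.resume(e);
            }
        });
    }

    private String buildYearlyReport(int year) {
        // Placeholder for a long-running, CPU-intensive calculation.
        return "{\"year\": " + year + ", \"status\": \"complete\"}";
    }
}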
3.3.3 Sizing the microservices
One of the most frustrating and imprecise tasks when designing a microservices system is deciding on the number and size of individual microservices. There is no strict rule regarding the optimal size, but some practices have been proven in real-world systems.
The following techniques can be used alone or in combination:
Number of files
You can gauge the size of a microservice in a system by the number of files it consists of. This is imprecise, but at some point you might want to break up a microservice that is physically too large. Large services are difficult to work with, difficult to deploy, and take longer to start and stop. However, be careful not to make them too small. When microservices are too small (frequently referred to as nanoservices, an anti-pattern), the resource cost of deploying and operating such a service overshadows its utility. Although microservices are often compared to the UNIX design ethos (that is, do one thing and do it correctly), it is best to start with larger services. You can always split one service into two later.
Too many responsibilities
A service that is responsible simultaneously for different subjects might need to be split up because usually it is difficult to test, maintain, and deploy. Even if all of the responsibilities are of the same type (for example, REST endpoints), there might be too many responsibilities for a single service to handle.
Service type
A good rule is that a microservice does only one thing, for example, one of the following tasks:
 – Handle authentication
 – Serve several REST endpoints
 – Serve several web pages
Normally, you do not want to mix these heterogeneous responsibilities. Although this might seem the same as the too many responsibilities technique, it is not. It deals with the quality, not the quantity, of the responsibilities. An anti-pattern might be a service that serves web pages and also provides REST endpoints, or serves as a worker.
Bounded context separation
This technique is important when an existing system is being partitioned into microservices. The name comes from the Bounded Context pattern of domain-driven design, described by Eric Evans and popularized by Martin Fowler.
A bounded context represents a part of the system that is relatively self-sufficient, with few links that must be severed before it can be turned into a microservice. If a microservice needs to talk to 10 other microservices to complete its task, that can be an indication that the division was made in an incorrect place in the monolith.
Team organization
Many microservices systems are organized around teams that are responsible for writing the code. Therefore, microservice partition follows team lines to maximize team independence.
One of the key reasons microservices are popular as an architectural and organizational pattern is that they allow teams to plan, develop, and deploy features of a system in the cloud without tight coordination. The expectation, therefore, is that microservice numbers and size are dictated by organizational and technical principles.
A well-designed microservices system uses a combination of these techniques. These techniques require a degree of good judgment that is acquired with time and experience with the system. Until that experience is acquired, start in small steps with microservices that might be on the larger side (more like mini-services), until more “fault lines” have been observed for subsequent subdivisions.
 
Note: In a well-designed system fronted with a reverse proxy, this reorganization can be run without disruption. Where a single microservice was serving two URL paths, two new microservices can serve one path each. The microservice system design is an ongoing story. It is not something that must be done all at the same time.
3.4 Refactoring
Refactoring is a practice that modernizes an application and takes advantage of the capabilities provided by new structures and platforms. The migration of a monolithic application to microservices follows the same path. Refactoring adds microservices to an application without changing the purpose of the application.
This section describes several techniques for refactoring a monolith application to microservices; it is largely based on an IBM developerWorks® article by Kyle Brown.
3.4.1 A reason to refactor to microservices
Starting over with new runtimes and programming languages is costly, especially when much of the code is developed in Java and still works. Refactoring to microservices is a more cautious approach because you can keep the old system running and move the monolithic application in parts to a more sustainable and current platform.
3.4.2 Strategies for moving to microservices
Moving a monolith application to microservices can involve these strategies:
Convert the whole monolith application to microservices
The construction of a new application based on microservices from scratch can be the best option. However, because this approach can involve too much change at one time, it is risky and often ends in failure.
Refactor gradually
This strategy is based on refactoring the monolith application gradually by building parts of the system as microservices that run together with the monolith application. Over time, the amount of functionality provided by the monolith application shrinks until it is completely migrated to microservices. This is considered a careful strategy. Martin Fowler refers to this application modernization strategy as the strangler application (a minimal routing sketch follows the list below).
Other aspects of this approach are as follows:
 – Do not add new features in the monolithic application; this prevents it from getting larger. For all new features, use independent microservices.
 – Create microservices organized around business capability, where each microservice is responsible for one topic.
 – A monolithic application usually consists of layers, such as presentation, business rules, and data access. You can create a microservice for the presentation layer and another microservice for business rules and data access. The focus is always on functionality and business.
 – Use domain-driven design (DDD) Bounded Context, dividing a complex domain into multiple bounded contexts and mapping out the relationships between them, where a natural correlation between service and context boundaries exists.
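As a minimal illustration of the strangler approach, the following servlet filter routes requests for functionality that has already been migrated to a new Catalog microservice and lets all other requests continue to the monolith. It assumes a Servlet 4.0 container (where Filter provides default init and destroy methods); in practice this routing is usually done in a reverse proxy or gateway rather than inside the monolith. The StranglerRoutingFilter class, the /catalog path, and the catalog.example.com URL are illustrative assumptions.

import java.io.IOException;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.annotation.WebFilter;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Routes catalog requests to the new microservice; everything else still
// reaches the monolith unchanged.
@WebFilter("/*")
public class StranglerRoutingFilter implements Filter {

    private static final String CATALOG_SERVICE_URL = "https://catalog.example.com";

    @Override
    public void doFilter(ServletRequest request, ServletResponse response,
                         FilterChain chain) throws IOException, ServletException {
        HttpServletRequest req = (HttpServletRequest) request;
        HttpServletResponse res = (HttpServletResponse) response;

        // Strip the monolith's context root to get the application path.
        String path = req.getRequestURI().substring(req.getContextPath().length());

        if (path.startsWith("/catalog")) {
            // Functionality that has been migrated is redirected to the microservice.
            res.sendRedirect(CATALOG_SERVICE_URL + path);
        } else {
            // Everything else falls through to the monolith.
            chain.doFilter(request, response);
        }
    }
}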
3.4.3 How to refactor Java EE to microservices
Consider these important aspects when moving Java Platform and Java EE applications to microservices:
Repackaging the application (see Figure 3-2 on page 44)
Use the following steps:
a. Split up the enterprise archives (EARs).
Instead of packaging all of your related web archives (WARs) in one EAR, split them into independent WARs. This might involve some minor changes to code, or more likely to static content, if you change application context roots to be separate.
b. Apply the container-per-service pattern.
Next, apply the container-per-service pattern and deploy each WAR in its own application server, preferably in its own container (such as a Docker container or a Bluemix instant runtime). You can then scale containers independently.
c. Build, deploy, and manage each WAR independently.
After they are split, you can manage each WAR independently through an automated DevOps pipeline (such as the IBM DevOps Pipeline Service). This is a step toward gaining the advantages of continuous delivery.
Figure 3-2 Repackaging the monolith application
Note: For more information about repackaging the monolith applications, see the following website:
Refactoring the monolith code (see Figure 3-3 on page 45):
After repackaging, where your deployment strategy is down to the level of independent WARs, you can start looking for opportunities to refactor:
 – Front-end
For simple servlet/JSP programs that are usually front ends to database tables, create a domain layer that you can represent as a RESTful service. Identifying your domain objects by applying domain-driven design can help you identify your missing domain services layer. After building the service, in the next phase you can refactor your existing servlet/JSP application to use the new service, or build a new interface by using JavaScript, HTML5, and CSS, or as a native mobile application.
HTTP session state: In this case, a good approach is moving the HTTP session state to a database.
 – SOAP or EJB services
Create the mapping to a RESTful interface and re-implement the EJB session bean interface or JAX-WS interface as a JAX-RS interface (a minimal sketch follows Figure 3-3). To do this, you might need to convert object representations to JSON.
 – REST or JMS services
You might have existing services that are compatible, or can be made compatible, with a microservices architecture. Start by untangling each REST or simple JMS service from the remainder of the WAR, and then deploy each service as its own WAR. At this level, duplication of supporting JAR files is fine; this is still a matter of packaging.
Figure 3-3 Refactoring monolith code overview
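As a minimal sketch of this refactoring, the following JAX-RS resource exposes existing domain logic as a RESTful endpoint that returns JSON. The AccountResource and Account classes, the /accounts path, and the findAccount method are illustrative; in a real migration the method body would delegate to the existing EJB session bean (for example, injected with @EJB) instead of the inline stand-in shown here.

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/accounts")
public class AccountResource {

    @GET
    @Path("/{id}")
    @Produces(MediaType.APPLICATION_JSON)
    public Account getAccount(@PathParam("id") String id) {
        // The domain object is returned directly; JSON-B (or Jackson) converts
        // it to JSON instead of a JSP rendering HTML or a JAX-WS endpoint
        // returning SOAP.
        return findAccount(id);
    }

    private Account findAccount(String id) {
        // Stand-in for the existing business logic (for example, an EJB session bean).
        return new Account(id, "Sample account");
    }

    // Simple domain object; public fields are serialized to JSON by default.
    public static class Account {
        public String id;
        public String name;

        public Account() { }

        public Account(String id, String name) {
            this.id = id;
            this.name = name;
        }
    }
}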
Refactoring monolith data
After you build and repackage the small services defined in the previous steps, the next step might be the most difficult problem in adopting microservices. That step is to create a new data model for each microservice based on the current data model.
Here are rules to follow:
 – Isolated islands of data (see Figure 3-4 on page 46)
Begin by looking at the database tables that your code uses. If the tables used are either independent of all other tables or come in a small, isolated island of a few tables joined by relationships, you can split those out from the rest of your data design.
To decide which database to use, examine the types of queries that you run. If most of the queries you use are simple queries on primary keys, a key-value database or a document database might be the best option. However, if you do have complex joins that vary widely (for example, the queries are unpredictable), staying with SQL might be your best option.
 – Batch data updates
If you have only a few relationships and you decide to move your data into a NoSQL database anyway, consider whether you only need to do a batch update into your existing database. Often, the relationships between tables do not have a strict time requirement; the data might not always need to be completely up to date. A data dump and load approach that runs every few hours is often sufficient (see the sketch after Figure 3-4).
 – Table denormalization
If you have more than a few relationships to other tables, you might be able to refactor (or in database administrator terms, denormalize) your tables.
Often, the reason for using highly normalized schemas was to reduce duplication, which saved space, because disk space was expensive. However, disk space is now inexpensive. Instead, query time is now what you must optimize; denormalization is a straightforward way to achieve that optimization.
Figure 3-4 Refactoring monolith data overview
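The following sketch illustrates the batch data update and denormalization ideas together: a small JDBC job that periodically extracts catalog data from the monolith's SQL database, joining the related tables once at extraction time, and loads it into a denormalized table owned by the catalog microservice. All JDBC URLs, credentials, and table and column names are illustrative assumptions, not taken from the example application.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

// Periodic dump-and-load job: copies catalog rows from the monolith's SQL
// database into the catalog microservice's own (denormalized) table.
public class CatalogBatchLoader {

    private static final String SOURCE_URL = "jdbc:db2://monolith-db:50000/SHOP";
    private static final String TARGET_URL = "jdbc:postgresql://catalog-db:5432/catalog";

    public static void main(String[] args) throws SQLException {
        try (Connection source = DriverManager.getConnection(SOURCE_URL, "reader", "secret");
             Connection target = DriverManager.getConnection(TARGET_URL, "catalog", "secret")) {

            target.setAutoCommit(false);

            try (Statement clean = target.createStatement()) {
                // Simple full reload: clear the previous snapshot before loading.
                clean.executeUpdate("DELETE FROM CATALOG_ITEM");
            }

            // Join once at extraction time so the target table is denormalized
            // and the microservice can answer queries without further joins.
            String extract = "SELECT i.ITEM_ID, i.NAME, i.PRICE, c.CATEGORY_NAME "
                           + "FROM ITEM i JOIN CATEGORY c ON i.CATEGORY_ID = c.CATEGORY_ID";

            try (Statement stmt = source.createStatement();
                 ResultSet rs = stmt.executeQuery(extract);
                 PreparedStatement insert = target.prepareStatement(
                         "INSERT INTO CATALOG_ITEM (ITEM_ID, NAME, PRICE, CATEGORY) "
                       + "VALUES (?, ?, ?, ?)")) {

                while (rs.next()) {
                    insert.setLong(1, rs.getLong("ITEM_ID"));
                    insert.setString(2, rs.getString("NAME"));
                    insert.setBigDecimal(3, rs.getBigDecimal("PRICE"));
                    insert.setString(4, rs.getString("CATEGORY_NAME"));
                    insert.addBatch();
                }
                insert.executeBatch();
            }
            target.commit();
        }
    }
}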
3.5 Identifying and creating a new architecture example
This section defines a new architecture based on the business case and monolith application described in 1.3.1, “Fictional Company A business problem” on page 23. This section follows the theory in 1.3.1 and provides the reasons and benefits for each new service.
3.5.1 Architecture: As is
The current application is a classic Java monolith application, defined in one EAR package and running inside an application server. The EAR package contains specific components for each part of the application, usually using the layers pattern. The current application uses JSF and Dojo for the front end, EJB for the business components, and JPA for the persistence layer and data access (DB2® and SQL), as shown in Figure 3-5 on page 47.
Figure 3-5 Monolith application architecture overview
3.5.2 Architecture: To be
The new architecture is based on a platform as a service (PaaS) cloud environment. The architecture uses services to improve control of the development cycle (DevOps). It also uses other available services, such as database, runtime, security, and application server services, to provide new features for the new application. It also offers more isolated scalability for each service.
Using gradual migration, the Catalog and Account components are moved to microservices, and the Order component is kept in the monolith application (see Figure 3-6 on page 48).
Figure 3-6 Microservice architecture overview
Details of each component are as follows:
PaaS platform
The PaaS provides a cloud-based environment with everything required to support the complete lifecycle of building and delivering web-based (cloud) applications, without the cost and complexity of buying and managing the underlying hardware, software, provisioning, and hosting.
PaaS offers the following benefits:
 – You can develop applications and get them to market faster.
 – You can deploy new web applications to the cloud in minutes.
 – Complexity is reduced with middleware as a service.
Security gateway
A secure web gateway is a type of security solution that prevents unsecured traffic from entering an internal network of an organization. It is used by enterprises to protect their employees and users from accessing and being infected by malicious web traffic, websites, viruses, and malware. It also ensures the implementation of, and compliance with, the organization’s regulatory policy. In the new architecture, this component is used for integration with the SQL database of the monolith application, and it is responsible for ensuring that this integration preserves integrity and security.
Five distinct applications
These applications are as follows:
 – UI application using jQuery, Bootstrap, and AngularJS
A new front-end application is created using technologies and frameworks that enable it to run on different devices (mobile, tablet, desktop). The necessary business information (Account, Catalog, and Order) is retrieved by using a REST API from the microservices.
 – Microservice application for Catalog
The component responsible for the Catalog is moved to a microservice, becoming an isolated application that enables better scalability. Also, a new search service, based on elastic search technology, is added to the microservice to improve searching within the catalog items.
The catalog is updated using extract, transform, and load (ETL), which gets data from the SQL monolith application using the gateway service for integration.
 – Microservice application for Order
The Order component is kept in the monolith application, but a microservice is created to get Order information from the monolith application, using a gateway service for integration (a minimal client sketch follows this list).
 – Microservice application for Account
The component responsible for the Account is moved to a microservice, becoming an isolated application that enables better scalability. Also, a new NoSQL database is added for the analytics service and for integration with social networks. These new features improve the ability to collect information that can be used to offer new products.
 – Enterprise Data Center
The migration of a monolith application is partial. In this phase, the Order feature is kept in the monolith application. A new API interface is added to the monolith application to offer access to the Order information.
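As a minimal sketch of how the Order microservice could retrieve order information from the monolith through the security gateway, the following JAX-RS 2.1 client reads an order as a JSON object. The OrderClient class, the gateway URL, and the orders resource path are assumptions for illustration; they are not part of the described architecture.

import javax.json.JsonObject;
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.MediaType;

// Order microservice client that fetches order data from the monolith's new
// API by going through the security gateway.
public class OrderClient {

    private static final String GATEWAY_URL = "https://gateway.example.com/monolith/api";

    public JsonObject getOrder(String orderId) {
        Client client = ClientBuilder.newClient();
        try {
            return client.target(GATEWAY_URL)
                         .path("orders")
                         .path(orderId)
                         .request(MediaType.APPLICATION_JSON)
                         .get(JsonObject.class);
        } finally {
            client.close();
        }
    }
}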
See Appendix A, “Additional material” on page 119 for information about accessing the web material for this publication.
 