Testing
It is essential that a microservice application is built with an awareness of how it can be tested. Having good test coverage gives you more confidence in your code and results in a better continuous delivery pipeline.
This chapter covers the tests that are required at three different stages in the build lifecycle:
Single service testing
Tests carried out in isolation by a team that owns a particular microservice in a system.
Staging environment
Tests that are run on objects in a staging environment. The microservices that form a particular application are deployed into a staging environment for testing.
Production environment
Tests carried out on the live production system.
Tests should be automated as part of the build, release, run (delivery) pipeline.
The following topics are covered:
7.1, “Types of tests”
7.2, “Application architecture”
7.3, “Single service testing”
7.4, “Staging environment”
7.5, “Production environment”
7.1 Types of tests
This chapter focuses on the Java-specific testing considerations, assuming some prior knowledge of testing microservices. The following types of tests are covered:
Unit
Tests a single class or a set of closely coupled classes. These unit tests can be run either by using the actual objects that the unit interacts with or by using test doubles or mocks.
Component
Tests the full function of a single microservice. During this type of testing, any calls to external services are mocked in some way.
Integration
Integration tests are used to test the communication between services. These tests are designed to test basic success and error paths over a network boundary.
Contract
Tests the agreed contract for APIs and other resources that are provided by the microservice. For more information, see “Consuming APIs” on page 38.
End-to-end
Tests a complete flow through the application or microservice. Usually used to test a golden path or to verify that the application meets external requirements.
7.2 Application architecture
The internal structure of your microservice directly affects how easy it is to test. Figure 7-1 shows a simplified version of the microservice architecture described in Chapter 2, “Creating Microservices in Java” on page 9.
Figure 7-1 The internal microservice architecture.
The different internal sections break down into the following subsections:
Resources
Exposes JAX-RS resources to external clients. This layer can handle basic validation of requests, but delegates any business logic to the domain logic layer.
Domain logic
This area is where the bulk of the functional code goes. This code has no knowledge of dependencies external to the application.
Service connector
Handles communication with external services. These can be services within the microservice system or third-party services. Responses are validated and then passed to the domain logic layer.
Repositories
Includes data mapping and validation. The repositories layer has the logic that is required to connect to the external data store.
The different sections of the application have different testing requirements. For example, domain logic does not require integration testing, whereas resources do. All of the code in a particular Java class should provide function from the same section. Separating the code in this way results in all of the code in a particular class having the same testing requirements. This technique makes your test suite simpler and better structured. Given the inherent complexity of microservices, it is easy to get into a mess with your testing, so anything that can add structure to your tests is a good idea.
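To make this separation concrete, the following sketch (plain Java, with illustrative names that are not taken from this book) gives each layer its own class, so each class carries a single kind of testing requirement: the domain logic needs only unit tests, the repository needs integration tests, and the resource needs request validation tests.

```java
// Domain logic: pure computation, no external dependencies, so unit tests suffice.
class PriceCalculator {
    int totalInCents(int unitPriceCents, int quantity) {
        return unitPriceCents * quantity;
    }
}

// Repository layer: owns data-store access, so it needs integration tests.
interface OrderRepository {
    int unitPriceCents(String itemId);
}

// Resource layer: validates the request, then delegates to domain logic.
class OrderResource {
    private final OrderRepository repository;
    private final PriceCalculator calculator = new PriceCalculator();

    OrderResource(OrderRepository repository) {
        this.repository = repository;
    }

    int quote(String itemId, int quantity) {
        if (quantity < 1) {
            throw new IllegalArgumentException("quantity must be positive");
        }
        return calculator.totalInCents(repository.unitPriceCents(itemId), quantity);
    }
}
```

Because the repository is an interface, a test can supply a one-line stub for it and exercise the resource and domain logic with no data store present.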
7.3 Single service testing
This section defines single service testing as the tests that the owners of an individual service must create and run. These tests exercise the code that is contained inside the logical boundary in the diagram of the internal microservice architecture as shown in Figure 7-1 on page 78. Three types of tests apply here: Unit tests, component tests, and integration tests. Unit and integration tests are used to test the individual parts of the microservice, and component tests are used to test the service as a whole.
7.3.1 Testing domain or business function
The code in your microservice that performs business function should not make calls to any services external to the application. This code can be tested by using unit tests and a testing framework such as JUnit. The unit tests should test for behavior and either use the actual objects (if no external calls are needed) or mock the objects involved in any operations.
When writing tests using the actual objects, a simple JUnit test suffices. For creating mocks of objects, you can either use the built-in capabilities of Java EE or use a mocking framework. The @Alternatives annotation in the Context and Dependency Injection (CDI) specification enables injection of mock objects instead of the actual beans. Plenty of mocking frameworks are available for Java. For example, JMockit is designed to work with JUnit to allow you to mock objects during testing. In the most basic test using JMockit, you can create mocked objects by using the @Mocked annotation and define the behavior of the mocked object in an Expectations block.
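The following framework-free sketch shows the idea behind both approaches: a hand-written test double stands in for the real collaborator, returns canned values, and records the interaction, which is what an @Alternative bean or a JMockit mock provides for you. All names here are illustrative.

```java
// Illustrative collaborator; in a real service this would be a CDI bean.
interface HistoryStore {
    String historyFor(String accountId);
}

// Production class under test: formats whatever the store returns.
class HistorySummaryFormatter {
    private final HistoryStore store;

    HistorySummaryFormatter(HistoryStore store) {
        this.store = store;
    }

    String summary(String accountId) {
        return "History: " + store.historyFor(accountId);
    }
}

// Hand-rolled test double: a canned return value plus interaction recording,
// the two things a mocking framework's Expectations block gives you.
class StubHistoryStore implements HistoryStore {
    String lastRequestedId;
    String cannedHistory = "order-1";

    public String historyFor(String accountId) {
        lastRequestedId = accountId; // record the call for later verification
        return cannedHistory;
    }
}
```

A mocking framework removes the boilerplate of writing such stubs by hand, but the test double makes explicit what the framework does behind the scenes.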
Consider an online payment application that has an “Order” microservice. Suppose that a class called OrderFormatter takes in an account ID and returns an order history in a particular format as in Example 7-1. The DatabaseClient class handles requests to the database to get the list of orders. The OrderFormatter is @ApplicationScoped so that it can be injected into any REST endpoints that use it.
Example 7-1 The OrderFormatter class formats the response from the Orders class
@ApplicationScoped
public class OrderFormatter {
@Inject
DatabaseClient client;
public FormattedHistory getOrderHistory(String id) {
History orderHistory = client.getHistory(id);
FormattedHistory formatted = format(orderHistory);
return formatted;
}
private FormattedHistory format(History history) {
// formatting logic omitted
}
}
To test this class, make the assumption that the DatabaseClient class returns the correct orders from the getHistory() function. The behavior that you want to test is the actual formatting. However, you cannot call the format() method directly because it is private. You want to avoid altering the code to enable testing, such as by changing the format() method to public. Instead, use a mocked version of the DatabaseClient class. Example 7-2 shows a simple test of the OrderFormatter class using JMockit and JUnit. The @Tested annotation indicates the class that is being tested. This technique allows test objects to be injected into the class when the test is running. The @Injectable annotation is used to inject a mocked version of the DatabaseClient into the OrderFormatter object when the test runs. The Expectations block defines the behavior of the mocked object.
Example 7-2 JMockit allows you to define what particular methods will return
public class TestOrderFormatter {
@Tested OrderFormatter formatter;
@Injectable DatabaseClient databaseClient;
@Test
public void testFormatter() {
String id = "Test ID";
new Expectations() {{
History history = // set history to be formatted
databaseClient.getHistory(id); returns(history);
}};
FormattedHistory expected = // set expected object
FormattedHistory actual = formatter.getOrderHistory(id);
assertEquals(expected.toString(), actual.toString());
}
}
7.3.2 Testing resources
Classes that expose JAX-RS endpoints or receive events should be tested by using two types of tests: Integration tests and contract tests.
Integration tests
Integration tests are used to verify the communication across network boundaries. They should test the basic success and failure paths in an exchange. Integration tests can be run either in the same way as unit tests or by standing up the application on a running server. To run integration tests without starting the server, call the methods that carry JAX-RS annotations directly. During the tests, create mocks for the objects that the resource classes call when a request comes in.
To run integration tests on a running server, you can use one of the methods described in “Tests on a running server” on page 82. To drive the code under test, use the JAX-RS client provided by JAX-RS 2.0 to send requests.
Integration tests should validate the basic success and error paths of the application. Incorrect requests should return useful responses with the appropriate error code.
Consumer driven contract
A consumer of a particular service has a set of input and output attributes that it expects the service to adhere to. This set can include data structures, performance, and conversations. The contract is documented by using a tool like Swagger. See Chapter 2, “Creating Microservices in Java” on page 9 for more information. Generally, have the consumers of a service drive the definition of the contract, which is the origin of the term consumer driven contract.
Consumer driven contract tests are a set of tests that determine whether the contract is being upheld. These tests should validate that the resources accept the input attributes defined in the contract, but also tolerate unknown attributes (the resource should just ignore them). They should also validate that the resources return only those attributes that are defined in the documentation. To isolate the code under test, use mocks for the domain logic.
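The “ignore unknown attributes” rule can be sketched minimally as follows. This toy example represents the payload as a Map rather than using a real JSON library, and the field names are invented for illustration; a real contract test would drive the JAX-RS resource itself.

```java
import java.util.HashMap;
import java.util.Map;

// Toy "tolerant reader": keeps only the attributes named in the contract
// and silently drops everything else, so new fields added by a consumer
// or provider do not break the exchange.
class OrderRequestReader {
    static Map<String, String> read(Map<String, String> payload) {
        Map<String, String> known = new HashMap<>();
        for (String field : new String[] {"accountId", "itemId"}) {
            if (payload.containsKey(field)) {
                known.put(field, payload.get(field));
            }
        }
        return known; // unknown attributes are ignored, not rejected
    }
}
```

A contract test would send a payload containing an extra, undocumented attribute and assert that the request still succeeds and the extra attribute has no effect.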
Maintaining consumer driven contract tests introduces some organizational complexity. If the tests do not accurately test the contract defined, they are useless. In addition, if the contract is out of date, then even the best tests will not result in a useful resource for the consumer. Therefore, it is vital that the consumer driven contract is kept up to date with current consumer needs and that the tests always accurately test the contract.
Contract tests require the actual API to be implemented. This technique requires that the application be deployed onto a server. See “Tests on a running server” on page 82 for more information. Use tools such as the Swagger editor to create these tests. The Swagger editor can take the API documentation and produce implementations in many different languages.
Another dimension to contract testing is the tests that are run by the consumer. These tests must be run in an environment where the consumer has access to a live version of the service, which is the staging environment. See section 7.4.3, “Contract” on page 86 for more details.
Tests on a running server
A few different methods are available for starting and stopping the application server as part of your automated tests. There are Maven and Gradle plug-ins for application servers such as WebSphere Application Server Liberty that allow you to add server startup into your build lifecycle. This method keeps complexity out of your application and contains it in the build code. For more information about these plug-ins, see the following websites:
Maven
Gradle
Another solution is to use Arquillian. Arquillian can be used to manage the application server during tests. It allows you to start and stop the server mid-test, or start multiple servers. Arquillian tests are also container agnostic, so they can be run on any application server. This feature is useful for contract testing because the consumers do not have to understand the application server or container that is used by the producer. For more information about Arquillian, see the following website:
For a more detailed comparison of these different methods, see “Writing functional tests for Liberty” at:
7.3.3 Testing external service requests
Inevitably, your microservice must make calls to external services to complete a request, such as calls to other microservices in the application or services external to the application. The classes that do this construct clients that make the requests and handle any failures. The code can be tested by using two sets of integration tests: One at the single service level and one in the staging environment. Both sets test the basic success and error handling of the client. More information about the tests in the staging environment is available in 7.4.2, “Integration” on page 86.
The integration tests at the single service level do not require the service under test or the external services to be deployed. To perform the integration tests, mock the response from the external services. If you are using the JAX-RS 2.0 client to make the external requests, this process can be done easily by using the JMockit framework.
Consider the online store example introduced in 1.5.1, “Online retail store” on page 7. Suppose the Account service must make a call to the Order service to get the list of recent orders for a particular account. The logic to make the request should be encapsulated in a class similar to Example 7-3. The OrderServiceClient is an @ApplicationScoped bean that is injected into the classes that use it. We have chosen to use CDI to inject the URL for the requests, rather than using a library that gets the service list from the registry as part of the request. In this case, we are just returning an empty History object if the response from the Order service is bad. Take advantage of a @PostConstruct method to initialize the JAX-RS client because you can then call the method directly in the test if required.
Example 7-3 OrderServiceClient handles all requests to the Order service
@ApplicationScoped
public class OrderServiceClient {
@Resource(lookup = "orderServiceUrl")
String orderServiceUrl;
private Client client;
@PostConstruct
public void initClient() {
this.client = ClientBuilder.newClient();
}
public History getHistory(String id) {
String orderUrl = orderServiceUrl + "/" + id;
WebTarget target = client.target(orderUrl);
Response response = callEndpoint(target, "GET");
History history = getHistoryFromResponse(response);
return history;
}
private History getHistoryFromResponse(Response response) {
if (response.getStatus() == Response.Status.OK.getStatusCode()) {
return response.readEntity(History.class);
} else {
return new History();
}
}
private Response callEndpoint(WebTarget target, String callType) {
Invocation.Builder invoBuild = target.request();
Response response = invoBuild.build(callType).invoke();
return response;
}
}
To test this class, mock the response from the Order service as shown in Example 7-4. By putting the logic that interprets the response in a separate method from the one making the call, you can test this logic by directly invoking that method. JMockit provides a Deencapsulation class to allow invocation of private methods from tests. This technique provides test coverage for the handling of response codes from external services. It is also possible to mock the JAX-RS WebTarget or InvocationBuilder if your class structure requires this. Higher-level coverage of external requests is done in the staging environment.
Example 7-4 The response from the Order service is mocked to enable isolated testing
public class OrderServiceClientTest {
@Mocked
Response response;
OrderServiceClient orderHistoryClient = new OrderServiceClient();
@Test
public void test200() {
new Expectations() {{
History user = new History();
user.addOrder("Test");
response.getStatus(); returns(200);
response.readEntity(History.class); returns(user);
}};
History history = Deencapsulation.invoke(orderHistoryClient, "getHistoryFromResponse", response);
assertTrue("Expected string Test", history.getOrders().contains("Test"));
}
@Test
public void test500() {
new Expectations() {{
response.getStatus(); returns(500);
}};
History history = Deencapsulation.invoke(orderHistoryClient, "getHistoryFromResponse", response);
assertTrue("Expected empty order list", history.getOrders().isEmpty());
}
}
7.3.4 Testing data requests
In a microservice architecture, each microservice owns its own data. See Chapter 5, “Handling data” on page 41 for more information. If you follow this guideline, the developers of a microservice are also responsible for any external data stores used. As described in 7.2, “Application architecture” on page 78, the code that makes requests to the external data store and performs data mapping and validation is contained in the repositories layer. When testing the domain logic, this layer should be mocked. Tests for data requests, data mapping, and validation are done by using integration tests with the microservice and a test data store deployed locally or on a private cloud. The tests check the basic success and error paths for data requests. If the data mapping and validation for your application requires extensive testing, consider separating out this code and testing it using a mocked database client class.
Test data
The local version of the data store must be populated with data for testing. Think carefully about what data you put in the data store. The data should be structured in the same way as production data but should not be unnecessarily complicated. It must serve the specific purpose of enabling data request tests.
7.3.5 Component testing
Component tests are designed to test an individual microservice as one piece. The component is everything inside the network boundary, so calls to external services are either mocked or are replaced with a “test-service.” There are advantages and disadvantages to both scenarios.
Using mocks
By mocking the calls to external services, you have fewer test objects to configure. You can easily define the behavior of the mocked system by using frameworks like JMockit, and no tests will fail due to network problems. The disadvantage of this approach is that it does not fully exercise the component because you are intercepting some of the calls, increasing the risk of bugs slipping through.
Test services
To fully exercise the communication boundaries of your microservice, you can create test services to mimic the external services that are called in production. These test services can also include a test database. The test services can also be used as a reference for consumers of your microservice. The disadvantage of this approach is that it requires you to maintain your test services. This technique requires more effort than maintaining a mocking system because you must fully test each test service and create a deployment pipeline for it.
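A test service does not need much machinery. The following sketch stands up a fake Order service by using the JDK's built-in com.sun.net.httpserver package; the path and payload are invented for illustration, and a real test service would also cover error paths (404s, 500s, slow responses).

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// A hand-rolled test service that mimics the Order service: any request
// to /orders returns the same canned history payload.
class FakeOrderService {
    private HttpServer server;

    int start() {
        try {
            server = HttpServer.create(new InetSocketAddress(0), 0);
            server.createContext("/orders", exchange -> {
                byte[] body = "{\"orders\":[\"Test\"]}".getBytes(StandardCharsets.UTF_8);
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream out = exchange.getResponseBody()) {
                    out.write(body);
                }
            });
            server.start();
            return server.getAddress().getPort(); // port to point the client at
        } catch (java.io.IOException e) {
            throw new RuntimeException(e);
        }
    }

    void stop() {
        server.stop(0);
    }

    // Small helper so tests can issue a GET without handling checked exceptions.
    static String get(String url) {
        try (java.io.BufferedReader in = new java.io.BufferedReader(
                new java.io.InputStreamReader(
                        new java.net.URL(url).openStream(), StandardCharsets.UTF_8))) {
            return in.readLine();
        } catch (Exception e) {
            return "ERROR: " + e;
        }
    }
}
```

The component under test is then configured with the fake service's URL, so the full network path is exercised without deploying the real Order service.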
If you are already using a mocking framework at other levels of testing, it makes sense to reuse those skills here. However, if you take the mocking approach, you must make sure that the tests in your staging environment exercise inter-service communications effectively.
7.3.6 Security verification
Security is important in a distributed system. You can no longer put your application behind a firewall and assume that nothing will break through. For more details about securing your microservices, see Chapter 6, “Application Security” on page 67.
Testing the security of your microservice is slightly different depending on how you implement security. If the individual services are just doing token validation, then test at the individual service level. If you are using another service or a library to validate the tokens, then that service should be tested in isolation and the interactions should be tested in the staging environment.
The final type of test to run uses security scanners, such as IBM Security AppScan®, to highlight security holes. AppScan scans your code, highlights any vulnerabilities, and provides recommendations for fixing them. For more information about Security AppScan, see the following website:
7.4 Staging environment
This section defines a staging environment as a test environment that is identical (where possible) to the production environment. The build pipeline deploys successfully tested microservices to the staging environment where tests are run to verify the communication across logical boundaries, that is, between microservices.
7.4.1 Test data
The staging environment should include any data stores that will be in your production system. As described in 7.3.4, “Testing data requests” on page 84, the test data should not be unnecessarily complicated. The data in this data store will be more complete than at the individual microservice level, as these tests are testing more complicated interactions.
Use tools to inject test data for you. Tools give you more control over the flow of data around the system, enabling you to test what happens if bad data is introduced. The microservice orchestration and composition solution Amalgam8 offers this feature through its controller component. The controller passes information between the service registry and the microservice side-car, and can inject data dynamically to simulate the introduction of bad data into the system.
For more information about Amalgam8, see the following website:
7.4.2 Integration
Integration tests are used to test the interactions between all the services in the system. The in-depth behavior of the individual services has already been tested at this stage. The consumer driven contract tests should have ensured that the services interact successfully, but these tests identify bugs that have been missed. The tests should check the basic success and error paths of service communication with the application deployed. Use the test data as discussed in the previous section.
Rather than testing all of the services at once, it might still be necessary to mock out some of the services during testing. Test the interaction of two specific services, or a small set of services, adding in mocked behavior when calls are made to outside the set. Mocking the calls to the services outside the group under test is done in the same way as the unit tests on the APIs. The same techniques that are used to start and stop the server or container for contract testing also apply here (see 7.3.2, “Testing resources” on page 81).
7.4.3 Contract
Every service that consumes another service or resource should have a set of contract tests that are run against that resource (especially in staging environments). Given that services evolve independently over time, it is important to ensure that the consumer's contract continues to be satisfied.
These tests are specifically written by the consumer (the client side), and are run and managed as part of the test suite for the consuming service. By contrast, the tests that are discussed in 7.3.2, “Testing resources” on page 81 are written by the provider to verify the contract as a whole.
7.4.4 End-to-end
End-to-end testing is essential to find the bugs that were missed previously. End-to-end tests should exercise certain “golden paths” through the application. It is unrealistic to test every path through the application with an end-to-end test, so identify the key ones and test those. A good way to identify these paths through the environment is to review the key external requirements of an application. For example, if your application is an online retail store you might test the following paths:
User logs in
User purchases an item
User views the summary of the order
User cancels the order
End-to-end testing should also include the user interface. Tools such as SeleniumHQ can be used to automate interactions with a web browser for testing. For more information about SeleniumHQ, see the following website:
7.4.5 Fault tolerance and resilience
A microservice should be designed in a fault tolerant way as described in 4.2, “Fault tolerance” on page 37. Various techniques can be used to increase the resilience and fault tolerance of individual microservices, but you should still test how fault tolerant your system is as a whole. Tests for fault tolerance should be performed with all of the services deployed. Do not use any mocks.
Taking down individual microservices
In a microservice system, you must not rely on all of the microservices being available at any one time. During the testing phase, make requests to the system while taking down and redeploying individual services. This process should include the microservices in the system and backend data stores. Monitor the time for requests to return and identify an acceptable response time for your application. If the response times are too long, consider reducing the timeout values on your services or altering the circuit breaker configurations.
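The behavior under test can be sketched with a hypothetical client: when the dependency is down, the call should return quickly with a fallback result instead of propagating the failure. The 500 ms timeout and empty-object fallback below are illustrative assumptions, not recommendations.

```java
// Hypothetical resilient client: bounded timeouts plus a fallback result,
// so a dead dependency costs at most the timeout rather than hanging requests.
class ResilientHistoryClient {
    private final String baseUrl;

    ResilientHistoryClient(String baseUrl) {
        this.baseUrl = baseUrl;
    }

    String history(String accountId) {
        try {
            java.net.HttpURLConnection conn = (java.net.HttpURLConnection)
                    new java.net.URL(baseUrl + "/" + accountId).openConnection();
            conn.setConnectTimeout(500); // bound the wait for a dead service
            conn.setReadTimeout(500);    // bound the wait for a slow service
            try (java.io.BufferedReader in = new java.io.BufferedReader(
                    new java.io.InputStreamReader(conn.getInputStream()))) {
                return in.readLine();
            }
        } catch (Exception e) {
            return "{}"; // fallback: empty history, the request still succeeds
        }
    }
}
```

A fault tolerance test takes the dependency down, calls the client, and asserts both that the fallback is returned and that the call completes within the acceptable response time.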
There are tools that can automate this process, the best known of which within the Java community is Netflix Chaos Monkey, which terminates instances of virtual machines to allow testing of fault tolerance. For more information about Netflix Chaos Monkey, see the following website:
Other tools, such as Gremlin, which is used by Amalgam8, take the approach of intercepting and manipulating calls to microservices rather than shutting down instances. For more information about Gremlin, see this website:
Injecting bad data
As in the integration tests, tools such as Amalgam8 can be used to automate the injection of bad data during testing. If you have successfully used the bulk head pattern, then bad data in one microservice should not propagate to other microservices.
Stress testing
Microservices should be able to handle unexpected loads. Stress testing should be used to test the bulk heads in your microservices. If a particular microservice is holding up requests, look at the configured bulk heads.
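A bulk head can be sketched as a semaphore that sheds load when saturated; a stress test then verifies that excess requests are rejected quickly rather than queueing behind a slow dependency. The limit and return values here are illustrative.

```java
import java.util.concurrent.Semaphore;
import java.util.function.Supplier;

// Minimal semaphore-based bulk head: at most maxConcurrent calls run at
// once, and further calls are rejected immediately instead of waiting.
class Bulkhead {
    private final Semaphore permits;

    Bulkhead(int maxConcurrent) {
        this.permits = new Semaphore(maxConcurrent);
    }

    String call(Supplier<String> operation) {
        if (!permits.tryAcquire()) {
            return "REJECTED"; // shed load rather than hold up requests
        }
        try {
            return operation.get();
        } finally {
            permits.release();
        }
    }
}
```

Under stress, the test asserts that calls beyond the limit come back as rejections within a bounded time, which shows that one saturated dependency cannot consume every request thread.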
Recoverability testing
Individual microservices should also be resilient to problems in their own deployment. If a container or virtual machine goes down, the microservice needs to be able to recover quickly. Orchestration tools such as Amalgam8 or Kubernetes (for container orchestration) spin up new instances of an application if one goes down. A microservice must have a fast startup time and, when a shutdown is required, it should shut down gracefully. Using an application server that provides a fast startup time is also essential.
Couple your recoverability testing with taking down service instances. Allow your orchestration system to create the new instances and monitor how long it takes to start the new service.
7.5 Production environment
To really test your system, you should consider running tests on the production environment. With so many moving parts and services constantly coming and going, you should not expect the system to continue working perfectly indefinitely. It is important to add monitoring and analytics to your application to warn of any bad behavior. For more information, see Chapter 9, “Management and Operations” on page 107.
Repeat these specific tests from the staging environment in the production environment:
Injecting test or bad data
Taking down services, both to test the fault tolerance of other services and the recoverability of the service you took down
Security verification
Use the same tools and techniques as in the staging environment. Be careful about when you run these tests. Do not run them during peak hours. Instead, select a timeslot with the smallest possible impact on your clients, just in case something goes down.
7.5.1 Synthetic monitoring
To determine the health of your production system, create synthetic transactions to run through your application. Use a browser emulator to drive the synthetic transactions and monitor the responses. Look for invalid responses or large response times. You can also run the automated build system periodically. If you are committing new code frequently, the builds also run frequently, but a particular service might go through periods where fewer commits are made. During these periods, continuing to run builds helps you discover any changes to the application dependencies that might cause problems.
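The monitoring side of a synthetic transaction reduces to a small classification rule, sketched below with an assumed 2-second threshold and invented result strings; in practice the thresholds come from your application's service level targets.

```java
// Hypothetical classification of a synthetic transaction result: flag
// invalid responses and slow responses, otherwise report healthy.
class SyntheticMonitor {
    static String classify(String response, long elapsedMillis) {
        if (response == null || response.isEmpty()) {
            return "ALERT: invalid response";
        }
        if (elapsedMillis > 2000) { // assumed threshold, tune per service
            return "ALERT: response took " + elapsedMillis + " ms";
        }
        return "OK";
    }
}
```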
7.5.2 Canary testing
Canary tests are named after the proverbial canary taken into the coal mine to serve as an early indicator of unsafe operating conditions for miners. They are minimal tests that verify all dependencies and basic operating requirements for a service are present and functioning. If the environment is unhealthy, canary tests fail quickly, providing an early indicator for a botched deployment.
 
