In previous chapters, we learned how to build and deploy a Mule application. The next step is testing our application, which is important to ensure we deliver bug-free projects.
In this chapter, let us explore testing, types of testing, different testing tools, and ways to test a Mule application using MUnit and the MUnit Test Recorder. In Chapter 3, Exploring Anypoint Studio, we created and ran a simple Mule application called HelloWorld. Here, we will be learning how to test the HelloWorld Mule application using MUnit and the MUnit Test Recorder.
After reading this chapter, you’ll come away with knowledge about the following topics:
The prerequisites for this chapter are as follows:
Testing is mainly done to check whether a software product works as expected based on its requirements. The tester verifies the product, a module, or individual pieces of code by running test cases, either manually or automatically. This helps to identify bugs at the early stages of the product life cycle.
Before starting the actual testing, the tester will write the test cases and test suites. A test case is a sequence of steps that the tester wants to test and verify against the functionality to ensure the system is working properly. A test suite is a collection of test cases.
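As a language-neutral illustration of these two concepts (using Python's unittest module rather than any Mule-specific tool), a test case and a test suite might look like the following sketch; the add() function is a hypothetical example, not part of any Mule application:

```python
import unittest

# Function under test (a hypothetical example for illustration)
def add(a, b):
    return a + b

# A test case: a sequence of steps verifying one piece of functionality
class AddTestCase(unittest.TestCase):
    def test_add_positive(self):
        self.assertEqual(add(2, 3), 5)

    def test_add_negative(self):
        self.assertEqual(add(-2, -3), -5)

# A test suite: a collection of test cases grouped for execution
def build_suite():
    suite = unittest.TestSuite()
    suite.addTest(AddTestCase("test_add_positive"))
    suite.addTest(AddTestCase("test_add_negative"))
    return suite

if __name__ == "__main__":
    result = unittest.TextTestRunner(verbosity=0).run(build_suite())
    print("Tests run:", result.testsRun, "Failures:", len(result.failures))
```

The same case/suite structure appears later in this chapter when we build MUnit test suites, just expressed in Mule's own tooling instead of Python.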
There are many types of testing available to test the software. Some of them are as follows:
We will use these types of testing while working on Mule projects.
Performing different types of testing provides us with the following benefits:
With this, we have understood the basics of testing, its types, and its benefits.
Let’s move on to learn about different testing tools.
There are many testing tools available for manual testing and automated testing. In manual testing, testers execute test cases by hand. In automated testing, we script the test cases and execute them automatically.
Testing tools can be grouped as follows:
These tools help to reduce the time taken by the testers in their day-to-day testing activities. There are many useful tools on the market to perform manual and automated testing. Some of the most commonly used testing tools are as follows:
With this, we have understood the different types of testing tools and a few examples of tools that are commonly used by testers.
Let’s deep dive into a few tools that are used in Mule projects, such as Postman, SoapUI, and JMeter.
Postman is an application used for API testing. It acts as an API/HTTP client to send a request to any web service endpoint. While sending the request, we can include the required standard or custom HTTP headers, such as Content-Type or Accept. We can also configure different types of authorization, such as basic authentication, bearer tokens, OAuth, and API keys.
As this tool is designed for API testing, we can send the required values as query parameters and URI parameters in the URL. Postman supports all HTTP methods, such as GET, POST, PUT, PATCH, DELETE, and HEAD. For POST or PUT requests, we can send the required payload in the body of the request.
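As a rough illustration of how a client such as Postman assembles a request, the following Python sketch builds a URL with a URI (path) parameter and query parameters, plus a headers dictionary; the endpoint, parameter names, and header values are hypothetical examples:

```python
from urllib.parse import urlencode, urljoin

base_url = "https://api.example.com/customers/"      # hypothetical endpoint
uri_param = "12345"                                  # URI (path) parameter
query_params = {"status": "active", "limit": "10"}   # query parameters

# Join the path parameter onto the base URL, then append the query string
url = urljoin(base_url, uri_param) + "?" + urlencode(query_params)

# Standard and custom headers sent with the request
headers = {
    "Content-Type": "application/json",
    "Accept": "application/json",
    "X-Correlation-Id": "abc-123",  # hypothetical custom header
}

print(url)  # https://api.example.com/customers/12345?status=active&limit=10
```

Postman performs the same assembly for us through its UI fields for params, headers, and body.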
After sending the request to the API URL, we receive the response payload, the HTTP response status code, and the status description from the API. We can also check the response time to see how long the API took to respond.
We can save each API request and group the related APIs into collections for future testing.
After configuring the API URL for testing, we can extract the code snippet in different languages to share it with the developer for development.
Postman also lets us configure environments so that we can test the API against DEV, SIT, and QA endpoints. Any values specific to an environment should be defined as environment variables: create them in the Environments section for each environment.
For example, the username and password used to call the API typically differ between environments. In that case, we can define username and password as environment variables with the appropriate values for each environment, and reference those variables in requests using the {{ }} syntax.
Figure 11.1 – Postman – environment variable
Figure 11.1 shows the Postman Environments section with details of two environments, DEV and TEST. In DEV, it shows the username and password environment variables with their respective values. After adding variables and values, click Save.
While calling APIs, we can substitute the username and password variables by using {{username}} and {{password}}, as shown in Figure 11.2:
Figure 11.2 – Postman – Authorization
As shown in Figure 11.2, we need to select the appropriate environment to call the API. Here, we are calling the DEV environment API with its username and password.
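The {{ }} substitution that Postman performs at request time can be illustrated with a minimal Python sketch, assuming a simple flat dictionary of environment variables; the variable values here are hypothetical:

```python
import re

# Minimal sketch of Postman-style {{variable}} substitution.
# env is a flat dictionary standing in for a Postman environment.
def resolve(template, env):
    return re.sub(r"\{\{(\w+)\}\}", lambda m: env[m.group(1)], template)

# Hypothetical DEV environment values
dev_env = {"username": "devuser", "password": "devpass"}

print(resolve("{{username}}:{{password}}", dev_env))  # devuser:devpass
```

Switching the selected environment in Postman simply swaps in a different dictionary of values, which is why the same saved request can run against DEV, SIT, or QA unchanged.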
Let us look at the Postman application home screen, which offers many options and features, as shown in the following screenshot.
Figure 11.3 – Postman application
Figure 11.3 shows the Postman home screen, which shows endpoint details, the method, the HTTP request, the response, the HTTP status code, code snippets, and other information.
To change any settings in Postman, click the settings icon and select Settings, as shown in Figure 11.4:
Figure 11.4 – Postman – settings
In the settings, we can set the request timeout value, set SSL certificate verification to on or off, configure certificates, configure proxy details, export data, and so on.
With this, we have understood the essentials of the Postman application.
Let us next explore how to create test cases in SoapUI.
SoapUI is an open source testing application, commonly used for manual and load testing of APIs. Using this tool, we can test SOAP and REST-based APIs, JMS, and other components.
In this section, let us create a SOAP project using a WSDL specification. WSDL stands for Web Services Description Language; a WSDL file is an XML document that defines the request and response structures and the endpoint details. Using a WSDL file, any developer can design, develop, and test a SOAP-based API.
Let’s follow these steps to create a SOAP project:
Figure 11.5 – SoapUI – New SOAP Project
We can browse for the WSDL in a local file location or via an HTTP/HTTPS URL. Once we click the OK button, SoapUI creates the SOAP project with all the operations, along with sample requests and endpoint details, based on the information in the WSDL specification. In Figure 11.6, toward the left of the screen, AddInteger, DivideInteger, and the other entries visible under the project are SOAP operations:
Figure 11.6 – SoapUI – SOAP project
The preceding figure shows the sample request structure. Clicking the Run button invokes the actual endpoint URL and displays the response in SoapUI.
We can also create the test suite and test case by selecting the options from the sample request.
Figure 11.7 – SoapUI – test case
Upon clicking Add to TestCase, SoapUI creates a test suite containing the test case. We can then execute these test cases to verify whether our functionality works as expected.
Next, we can create load test cases. We specify the number of transactions that we want to run within a given time, and then execute the load test.
Figure 11.8 – SoapUI – load testing
As shown in Figure 11.8, we sent 100 transactions to our API endpoint, a maximum of 5 messages at a time, with a 1,000-millisecond (1-second) delay after every 5 messages. The load test started at 12:57:50 P.M. and completed at around 12:58:12 P.M., meaning we completed 100 transactions within 22 seconds. In this test, the minimum, maximum, and average API response times were 254 milliseconds, 1,169 milliseconds, and 317 milliseconds, respectively.
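A quick back-of-envelope sketch shows why a run like this takes roughly 20 seconds: with 100 transactions sent in bursts of 5 and a 1-second delay after each burst, the enforced delays alone account for most of the elapsed time, with the remainder spent waiting on API responses.

```python
# Back-of-envelope estimate for the load test described in the text:
# 100 transactions, in bursts of 5, with a 1-second delay between bursts.
total_transactions = 100
burst_size = 5
delay_seconds = 1.0

bursts = total_transactions // burst_size       # 20 bursts of 5 messages
delay_time = (bursts - 1) * delay_seconds       # no delay needed after the final burst

print(bursts)      # 20
print(delay_time)  # 19.0 seconds of enforced delays
```

The ~19 seconds of delays plus the per-burst response times is consistent with the ~22 seconds observed in the test run.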
We can change the values of Limit, Total Runs, and Test Delay to test more transactions to understand the API response and failure rates from the load testing.
With this, we have understood how to create a SOAP project, test case, and load test case, and also how to test using SoapUI.
Now, let’s move on to learning how to perform load testing using JMeter.
Performance is an important factor for web, mobile, and other applications. To measure an application's performance, we need to send different workloads to it and observe how it behaves. We use the JMeter tool to perform load testing and measure performance. Apache JMeter is an open source application built on the Java platform. It is platform independent and works on Windows, Linux, and other operating systems. It is mainly used for load, stress, and web service testing, and helps to ensure that our application performs well under different workloads.
Using JMeter, we can perform load testing for HTTP, FTP, JDBC, JMS, Java, and other components.
Let’s follow these steps to create and execute a test plan:
Figure 11.9 – Launching JMeter
It opens the Apache JMeter GUI in a new window.
Figure 11.10 – Apache JMeter – home window
When the JMeter GUI opens, it displays Test Plan on the left side of the user interface.
Figure 11.11 – JMeter – adding a thread group
A thread group in JMeter simulates concurrent requests to the API endpoint. After the execution, we can view the results in various formats, such as a graph, a table, a tree, or logs.
Figure 11.12 – JMeter – Thread Group configuration
With this configuration, JMeter posts five requests, waits for 1 second, and then continues the loop, running four loops in total. Overall, it calls the API endpoint 20 times.
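The Thread Group arithmetic can be checked with a quick sketch; the variable names mirror the JMeter configuration fields:

```python
# Thread Group settings from the configuration described in the text
threads = 5   # Number of Threads (users) — concurrent requests per loop
ramp_up = 1   # Ramp-Up Period in seconds
loops = 4     # Loop Count

# Each thread executes the sampler once per loop
total_requests = threads * loops
print(total_requests)  # 20
```

Increasing either the thread count or the loop count scales the workload, which is exactly how we raise the load later in this section.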
Setting Continue in the Thread Group configuration means the test will keep running even when some samples fail.
Figure 11.13 – JMeter – Sampler | HTTP Request
Here, we choose HTTP Request as we are going to invoke an HTTP-based web service for our testing. To test against a database, we would choose JDBC Request instead.
Figure 11.14 – JMeter – HTTP Request configuration
Figure 11.15 – JMeter – adding a listener
Here, we select View Results in Table. We can choose one of the other options to view results in graph or tree format, based on our preferences.
Figure 11.16 – JMeter – start the test
Now, we will be able to see that the test execution started and executed 20 times as per our Thread Group configuration.
Figure 11.17 – JMeter – test execution results
In Figure 11.17, we can see that the overall execution completed and that each request took only a few milliseconds to get the web service response. This confirms that our application works fine under a workload of five concurrent requests for four loops. We can increase the workload by increasing the number of threads in the configuration to further check the performance of our application.
With this, we have understood how to use JMeter to create and execute the test plan for load/performance testing.
Now, let’s explore MUnit in Anypoint Studio.
MUnit is a unit testing framework to test a Mule application. It provides complete unit testing capabilities within Anypoint Studio. MUnit has two modules, MUnit and MUnit Tools.
The MUnit module has the following operations:
The MUnit Tools module has the following operations to validate whether the application is working as expected:
Using MUnit, we can perform the following actions:
Let’s try to create a test suite to execute our test cases.
As we have learned, a test suite is a collection of test cases. In order to create a test suite, we need to create a Mule application first. Instead of creating a Mule application from scratch, we will use the HelloWorld Mule application, which we developed in Chapter 3, Exploring Anypoint Studio. The application has HTTP Listener with the /hello endpoint, Logger to log the Welcome to Hello world application message, and Transform Message to output { message: "Hello World" }. If you did not create this Mule application earlier, then you can use the .jar file to import the application into Anypoint Studio using the File menu option. Select Import and then click Packaged mule application (.jar) to create the Mule application.
Now, let us create the test suite using MUnit:
Figure 11.18 – MUnit – creating a blank test
This creates a new test suite in /src/test/munit/. The test suite contains Behavior, Execution, and Validation. In the test suite, Behavior sets the input for the test suite. The Execution step calls the actual flow in the Mule application. In the Validation step, we can write any kind of condition to validate the Mule application output. We can also see two different modules (MUnit and MUnit Tools) added in Mule Palette, as shown in Figure 11.19:
Figure 11.19 – MUnit – test suite
Figure 11.20 – MUnit – test suite Validation
When the test suite runs, the Execution section calls the HelloWorld flow, which returns { message: "Hello World" }. The Validation section then compares the flow's result with the expected value specified in the Assert equals condition. If the values match, the test passes with a valid output response.
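The Behavior, Execution, and Validation pattern can be sketched in plain Python; this is an illustrative stand-in for the MUnit test suite, not real MUnit XML, and hello_world_flow is a stub for the actual Mule flow:

```python
# Stand-in for the HelloWorld flow: returns a fixed payload, like the
# Transform Message step in the HelloWorld application.
def hello_world_flow(event):
    return {"message": "Hello World"}

def run_test():
    # Behavior: set up the input event for the flow
    event = {"attributes": {"method": "GET", "path": "/hello"}}
    # Execution: call the actual flow under test
    payload = hello_world_flow(event)
    # Validation: assert the output equals the expected value
    expected = {"message": "Hello World"}
    assert payload == expected, "The payload does not match"
    return "PASSED"

print(run_test())  # PASSED
```

MUnit expresses the same three steps declaratively in the test suite XML, with Assert equals playing the role of the assert statement here.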
Figure 11.21 – MUnit – test suite coverage report
If all the flow steps are executed in a Mule application, then it will show 100% coverage. As our test suite is successful, the Failures count is 0. This coverage report is also very useful when continuous integration (CI) and continuous deployment (CD) are set up in MuleSoft: the pipeline checks the coverage report percentage to decide whether to continue with deployment to the target environment. For example, if the coverage is around 40%, deployment will not proceed; if the coverage is more than 80%, deployment will proceed to the target environment.
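A CI/CD coverage gate like the one described can be sketched as follows; the 80% threshold is an example policy used for illustration, not a MuleSoft default:

```python
# Sketch of a CI/CD coverage gate based on the MUnit coverage report.
def should_deploy(coverage_percent, threshold=80.0):
    """Proceed with deployment only if coverage meets the threshold."""
    return coverage_percent >= threshold

print(should_deploy(40.0))   # False — pipeline stops before deployment
print(should_deploy(100.0))  # True — deploy to the target environment
```

In a real pipeline, the coverage percentage would be read from the MUnit coverage report produced by the build, and the threshold would be set by the team's quality policy.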
Figure 11.22 – MUnit – test suite MUnit Errors pane
Also, here, it will show 100% coverage as it has executed all the steps in the Mule application. As this test suite failed, it shows the Failures count as 1 and it also shows more details about the error in the MUnit Errors view.
In the preceding example, we saw a Mule application with just one flow. If we have two flows in the same Mule application, then the test suite file will have two different flows to handle the unit test cases, as shown in Figure 11.23:
Figure 11.23 – MUnit – test suite with multiple flows
With this, we have understood how to create and run a test suite in Anypoint Studio.
Next, let’s explore the MUnit Test Recorder in Anypoint Studio.
Writing each and every test suite manually is time consuming. Hence, we can use the MUnit Test Recorder to create test suites automatically and capture the inputs.
In a Mule application, we can use the Record test for this flow option in the flow to create the required test suites and also start the recording to capture the inputs. The input can be query parameters, URI parameters, or the request payload.
The MUnit Test Recorder automatically creates test suites only for successful scenarios. For failure scenarios or additional conditions, we need to create test cases manually. We can debug the test suites by adding breakpoints.
Let’s create a test suite using the Test Recorder.
In this section, we will learn how to create test suites in a Mule application using the MUnit Test Recorder:
It starts the Mule runtime in Anypoint Studio and deploys the application. Once the application is deployed, it shows the DEPLOYED status. A dialog box titled Test Recorder with the message Waiting for input data appears, as shown in Figure 11.24:
Figure 11.24 – MUnit – Test Recorder “Waiting for input data…”
In order to capture the input, let us send a request from Postman to our Mule application endpoint.
Figure 11.25 – Postman – sending a request to a Mule application
Once you send the request, it reaches the Mule application and executes all the steps there. We can see the Logger message in the console, which confirms the execution of the Mule application. Finally, it saves/records the input/output, as shown in the Test Recorder pop-up dialog box in Figure 11.26:
Figure 11.26 – Test Recorder – “Input recorded”
Once we click Configure test, the New Recorded Test Welcome dialog box appears.
Figure 11.27 – New Recorded Test Welcome screen
In Figure 11.27, the default values are shown for the test suite's filename and test name, but we can change them to any names as per our naming convention. Upon clicking Next, the Configure Test dialog box appears.
Figure 11.28 – New Recorded Test Configure Test
It shows the name of the Mule application and its flow steps. The Set input and assert output section shows the input and output captured and recorded during the earlier run from Postman. The Flow Input tab captures the input received through attributes and the payload. As we sent the request using the GET method, there is no input payload in the Flow Input section.
Figure 11.29 – Flow Input
We can use these as test suite input. Now, let us see what information is available in the Flow Output tab.
The Flow Output tab captures the output (attributes and payload) received after the execution of the Mule application.
Figure 11.30 – Flow Output
We can use these output values for validation in our test suite.
Figure 11.31 – New Recorded Test Test Summary
Now our test suite is created successfully with all the supported test suite .xml and .dwl files.
The test suite has three steps: Behavior, Execution, and Validation.
The Behavior step sets the inputs that we captured earlier using the MUnit Test Recorder. This input data will be taken from the set-event-attributes.dwl and set-event-payload.dwl files, as shown in Figure 11.32.
set-event-attributes.dwl and set-event-payload.dwl contain the attributes data and payload data, respectively. As this is a GET request, the set-event-payload.dwl file will be empty. For a POST request, the input data, in JSON, XML, or any other format, would be captured in the .dwl file.
Figure 11.32 – MUnit test suite Behavior
The Execution step calls the flow and passes all the values received from the Behavior step.
The Validation step compares the Mule application output against the captured output that is available in the recorded file (assert_expression_payload.dwl).
Figure 11.33 – Recorded output payload
If validation fails, then the Validation step in the test suite will throw an error with the message The payload does not match.
Figure 11.34 – MUnit test suite Validation
With this, we have understood how to record input and output in a Mule application and use those captured inputs and outputs in our tests using the MUnit Test Recorder.
In this chapter, we had a look at the basics of testing and the various types of testing tools that are available.
We created a Mule application using a .jar file, tried to create a test case using the MUnit framework, and carried out tests using MUnit.
We also saw how the MUnit Test Recorder helps to create test cases automatically.
On completing this chapter, you expanded your knowledge of how to test a Mule application using the MUnit testing framework and I am sure that you are now confident enough to test your own Mule application.
In the next chapter, Chapter 12, MuleSoft Integration with Salesforce, we’ll explore one of the ways to integrate with Salesforce using the Salesforce connector.
Try the following assignments to explore more on MUnit.
Take a moment to answer the following questions to serve as a recap of what you just learned in this chapter: