11

Testing Your Application

In previous chapters, we learned how to build and deploy a Mule application. The next step is testing our application, which is important to ensure we deliver bug-free projects.

In this chapter, let us explore testing, types of testing, different testing tools, and ways to test a Mule application using MUnit and the MUnit Test Recorder. In Chapter 3, Exploring Anypoint Studio, we created and ran a simple Mule application called HelloWorld. Here, we will be learning how to test the HelloWorld Mule application using MUnit and the MUnit Test Recorder.

After reading this chapter, you’ll come away with knowledge about the following topics:

  • Different types of testing
  • Different types of testing tools
  • Commonly used testing tools, such as Postman, SoapUI, and JMeter
  • How to create and run a test suite using MUnit
  • How to create and run a test suite using the MUnit Test Recorder

Technical requirements

The prerequisites for this chapter are as follows:

Introduction to testing

Testing is mainly done to check whether the software product is working as expected based on the requirements. The tester will verify the product, module, or code of the software product by running some test cases manually or automatically. This helps to identify bugs at the early stages of the product life cycle.

Before starting the actual testing, the tester will write the test cases and test suites. A test case is a sequence of steps that the tester wants to test and verify against the functionality to ensure the system is working properly. A test suite is a collection of test cases.

There are many types of testing available to test the software. Some of them are as follows:

  • Unit testing: This helps to validate the smallest portion of the software product, for example, testing a specific program in an entire software product.
  • Functional testing: This verifies the functionality of the product and whether it is working as expected or not based on the functional requirements.
  • Performance/load testing: This validates the functionality by sending lots of requests (huge workloads) at the same time and at different intervals, for example, testing e-commerce platforms or websites with a large number of users (1,000+) at the same time to check the performance of the product.
  • Regression testing: This is used to check whether recent changes to the product break existing functionality. When we are short on time to perform all the test cases again, this type of testing is preferred.
  • Stress testing: This is used to find the maximum volume/stress the system can accept. For example, say we start testing with 500 concurrent users on an e-commerce platform. Assume it is working well with 500 users. Then, we try a higher limit, such as 600 concurrent users, to verify whether the system can accept 600 requests at the same time. If the system goes down, then we know that the maximum volume our system can accept is 500 users, not 600.
  • Integration testing: This is used to verify whether different software products or components function together. For example, our Mule application picks data from the database and sends it to the partner system’s web service. This testing ensures end-to-end functionality from the database to the partner’s systems.
  • User acceptance testing (UAT): This is done to check whether the system meets the user’s requirements.
  • Vulnerability testing: This is used to check whether any security-related issues are present in the software product.

We will use these types of testing while working on Mule projects.

Performing different types of testing provides us with the following benefits:

  • Cost savings, as defects found early are cheaper to fix
  • Customer satisfaction because of the quality of the product
  • Surfaces design issues and poor design decisions at an early stage
  • Improved security
  • Better performance of the software product

With this, we have understood the basics of testing, its types, and its benefits.

Let’s move on to learn about different testing tools.

Getting familiar with testing tools

There are many testing tools available to perform manual and automated testing. In manual testing, testers execute test cases by hand. In automated testing, we script the test cases and execute them automatically.

Testing tools can be grouped as follows:

  • Test management tool: To track test cases and execution
  • Defect/bug tracking tool: To log the defects
  • Mobile testing tool: To test different mobile devices that run on iOS, Android, and other operating systems (OSs)
  • Integration testing tool: To test two or more modules together in order to verify whether all the modules work with each other
  • API testing tool: To test web services
  • Load/performance testing tool: To check the performance of the system
  • Security testing tool: To check any security vulnerability in the software product

These tools help to reduce the time taken by the testers in their day-to-day testing activities. There are many useful tools on the market to perform manual and automated testing. Some of the most commonly used testing tools are as follows:

  • Selenium (automation)
  • Postman and Newman (API testing)
  • JMeter (load testing)
  • Gatling (load testing)
  • HP ALM (test management)
  • Jira and Bugzilla (test, defect, and bug tracking)
  • Appium (mobile automation)
  • Tricentis Tosca (automated testing) and Katalon Studio (licensed automation tool)
  • BrowserStack (websites and mobile testing) and SeeTest (mobile testing)
  • LoadRunner (load testing)

With this, we have understood the different types of testing tools and a few examples of tools that are commonly used by testers.

Let’s deep dive into a few tools that are used in Mule projects, such as Postman, SoapUI, and JMeter.

Postman

Postman is an application used for API testing. It acts as an API/HTTP client to send a request to any web service endpoint. While sending the request, we can send the required standard or custom HTTP headers, such as content-type or accept. We can also configure different types of authorization, such as basic authentication, a bearer token, OAuth, and an API key.

As this tool is related to API testing, we can send the required value in the query parameters and URI parameters of the URL. This tool supports the testing of all types of HTTP methods, such as get, post, put, patch, delete, and head. If it is a post or put method, we can send the required payload in the body of the request.

After we have sent the request to the API URL, we will receive the response payload, the HTTP response status code, and the status description from the API. We can also check the response time to understand how much time it has taken to receive the response.

We can save each API request and group the related APIs into collections for future testing.

After configuring the API URL for testing, we can extract the code snippet in different languages to share it with the developer for development.

Postman also enables configuring environments in order to test the API using DEV, SIT, and QA endpoints. Any value that is specific to an environment should be defined as an environment variable; we create these variables in the Environments section for each environment.

For example, the username and password used to call the API typically differ for each environment. In that case, we can set username and password variables in each environment to specify their values and then reference those variables when calling the API using the {{ }} syntax.
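
As an illustration, if a request’s URL is set to https://{{host}}/api/customers (host being a hypothetical third variable alongside username and password), Postman resolves each {{ }} placeholder from whichever environment is currently selected before sending the request.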

Figure 11.1 – Postman – environment variable

Figure 11.1 shows the Postman Environments section with two environments, DEV and TEST. In DEV, it shows the username and password environment variables with their respective values. After adding variables and values, click Save.

While calling APIs, we can substitute the username and password variables by using {{username}} and {{password}}, as shown in Figure 11.2:

Figure 11.2 – Postman – Authorization

As shown in Figure 11.2, we need to select the appropriate environment to call the API. Here, we are calling the DEV environment API with its username and password.

Let us look at the Postman application home screen, which has many options/features as shown in the following screenshot.

Figure 11.3 – Postman application

Figure 11.3 shows the Postman home screen, with endpoint details, the method, the HTTP request and response, the HTTP status code, code snippets, and other information.

To change any settings in Postman, click the settings icon and select Settings, as shown in Figure 11.4:

Figure 11.4 – Postman – settings

In the settings, we can set the request timeout value, set SSL certificate verification to on or off, configure certificates, configure proxy details, export data, and so on.

With this, we have understood the essentials of the Postman application.

Let us next explore how to create test cases in SoapUI.

SoapUI

SoapUI is an open source testing application that is commonly used for manual and load testing of APIs. Using this tool, we can test SOAP- and REST-based APIs, JMS, and other components.

Creating a SOAP project

In this section, let us create a SOAP project using the WSDL specification. WSDL refers to a Web Services Description Language file, which is in .xml format. This specification file contains the request and response structures and endpoint details. Using a WSDL file, any developer can design, develop, and test their SOAP-based APIs.
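
To make this concrete, the following is a trimmed, illustrative WSDL fragment for a hypothetical calculator service with an AddInteger operation (all names and the namespace are made up for illustration):

<definitions xmlns="http://schemas.xmlsoap.org/wsdl/"
             xmlns:tns="http://example.com/calculator"
             xmlns:xsd="http://www.w3.org/2001/XMLSchema"
             targetNamespace="http://example.com/calculator"
             name="CalculatorService">
  <!-- Request and response structures -->
  <message name="AddIntegerRequest">
    <part name="Arg1" type="xsd:int"/>
    <part name="Arg2" type="xsd:int"/>
  </message>
  <message name="AddIntegerResponse">
    <part name="AddIntegerResult" type="xsd:int"/>
  </message>
  <!-- The operations exposed by the service -->
  <portType name="CalculatorPortType">
    <operation name="AddInteger">
      <input message="tns:AddIntegerRequest"/>
      <output message="tns:AddIntegerResponse"/>
    </operation>
  </portType>
  <!-- The binding and service sections (omitted here) define the transport and endpoint URL -->
</definitions>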

Let’s follow these steps to create a SOAP project:

  1. Open the SoapUI application, click the File menu option, and select New SOAP Project.
  2. Provide the project name, browse the WSDL file location, and click OK.
Figure 11.5 – SoapUI – New SOAP Project

We can browse the WSDL from a local file location or an HTTP/HTTPS URL. Once we click the OK button, it creates the SOAP project with all the operations, along with sample requests and endpoint details based on the information available in the WSDL file specification. In Figure 11.6, toward the left of the screen, AddInteger, DivideInteger, and the other entries that we can see under the project are SOAP operations:

Figure 11.6 – SoapUI – SOAP project

As shown in the preceding figure, we can see the sample request structure. Clicking the Run button invokes the actual endpoint URL and provides the response in SoapUI.
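
For a calculator-style service like the one in Figure 11.6, the sample request that SoapUI generates for the AddInteger operation would be a SOAP envelope along these lines (the namespace and element names are assumptions; SoapUI fills in the real ones from the WSDL):

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:cal="http://example.com/calculator">
   <soapenv:Header/>
   <soapenv:Body>
      <cal:AddInteger>
         <!-- SoapUI inserts ? placeholders; replace them with test values -->
         <cal:Arg1>5</cal:Arg1>
         <cal:Arg2>10</cal:Arg2>
      </cal:AddInteger>
   </soapenv:Body>
</soapenv:Envelope>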

We can also create the test suite and test case by selecting the options from the sample request.

  3. Select the Request option and choose Add to TestCase to create the test suite and test cases.
Figure 11.7 – SoapUI – test case

Upon clicking Add to TestCase, SoapUI creates a test suite with a test case. We can then execute these test cases to verify whether our functionality is working as expected.

  4. If you want to perform load testing for a specific API to measure its performance, right-click on the test case, select Load Tests, and then click New LoadTest.

This creates the load test case. We can specify the number of transactions that we want to run within the specified time, and then execute the load test.

Figure 11.8 – SoapUI – load testing

As shown in Figure 11.8, we have tested sending 100 transactions to our API endpoint, a maximum of 5 messages at a time with a 1,000-millisecond (1-second) delay/interval after every 5 messages. This load test started at 12:57:50 P.M. and completed at around 12:58:12 P.M. This means we were able to complete 100 transactions within 22 seconds, a throughput of roughly 4.5 transactions per second. In this test, the API response minimum, maximum, and average times were 254 milliseconds, 1,169 milliseconds, and 317 milliseconds, respectively.

We can change the values of Limit, Total Runs, and Test Delay to test more transactions and understand the API response times and failure rates under load.

With this, we have understood how to create a SOAP project, test case, and load test case, and also how to test using SoapUI.

Now, let’s move on to learning how to perform load testing using JMeter.

JMeter

Performance is an important factor for web-based, mobile-based, and other applications. In order to measure an application’s performance, we need to send it different workloads. We use the JMeter tool to perform this load testing and measure the performance. Apache JMeter is an open source application built on the Java platform. It is platform independent and works on Windows, Linux, and any other OS. It is mainly used for load, stress, and web service testing. This helps to ensure that our application performs well with different workloads.

Using JMeter, we can perform load testing for HTTP, FTP, JDBC, JMS, Java, and other components.

Let’s follow these steps to create and execute a test plan:

  1. If your OS is Windows, then open JMeter from the installation path, C:\apache-jmeter-5.4.3\bin\jmeter.bat, as shown in Figure 11.9:
Figure 11.9 – Launching JMeter

It opens the Apache JMeter GUI in a new window.

Figure 11.10 – Apache JMeter – home window

When the JMeter GUI opens, it displays Test Plan on the left side of the user interface.

  2. Change the name of Test Plan to Test Plan Hello World in the Name field.
  3. Right-click Test Plan Hello World, then select Add | Threads (Users) | Thread Group.
Figure 11.11 – JMeter – adding a thread group

A thread group in JMeter simulates concurrent requests to the API endpoint. After the execution, we can view the results in various formats, such as a graph, a table, a tree, or logs.

  4. On the Thread Group screen, set Action to be taken after a Sampler error to Continue, Number of Threads to 5, Ramp-up period to 1 second, and Loop Count to 4.
Figure 11.12 – JMeter – Thread Group configuration

This configuration makes JMeter start five threads over a 1-second ramp-up period, with each thread looping four times. Overall, it calls the API endpoint 20 times.

Setting Continue in the Thread Group configuration means it will continue testing even when some test fails.

  5. Right-click on Thread Group and select Add | Sampler | HTTP Request.
Figure 11.13 – JMeter – Sampler | HTTP Request

Here, we choose HTTP Request as we are going to invoke an HTTP-based web service for our testing. To test with a database, we would choose JDBC Request.

  6. On the HTTP Request screen, provide the HTTP URL, method, and path details of our Mule application.
Figure 11.14 – JMeter – HTTP Request configuration

  7. Right-click HTTP Request and select Add | Listener | View Results in Table.
Figure 11.15 – JMeter – adding a listener

Here, we select View Results in Table. We can choose one of the other options to view results in graph or tree format, based on our preferences.

  8. Click the Save button and choose the file location to save the test plan file.
  9. Click the Start button to start the test plan execution.
Figure 11.16 – JMeter – start the test

Now, we will be able to see that the test execution started and executed 20 times as per our Thread Group configuration.

Figure 11.17 – JMeter – test execution results

In Figure 11.17, we can see that the overall execution completed and each request took only a few milliseconds to get the web service response. This ensures our application is working fine with the workload of five concurrent requests for four loops. We can increase the workload by increasing the number of threads in our configuration to check the performance of our application.
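
As a side note, once the test plan is saved, JMeter can also run it without the GUI, which consumes fewer resources during heavy load tests. From the JMeter bin folder, a command along the lines of jmeter -n -t "Test Plan Hello World.jmx" -l results.jtl runs the saved plan in non-GUI mode and writes the results to a log file (the file names here are assumptions based on our example).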

With this, we have understood how to use JMeter to create and execute the test plan for load/performance testing.

Now, let’s explore MUnit in Anypoint Studio.

Introducing MUnit

MUnit is a unit testing framework to test a Mule application. It provides complete unit testing capabilities within Anypoint Studio. MUnit has two modules, MUnit and MUnit Tools.

The MUnit module has the following operations:

  • Set Event: To add a payload, variable, or attribute required for testing.
  • Set null payload: To add a null value for a payload during testing.
  • After Suite: This runs after all the test executions are completed. For example, if a test suite has 10 tests, then it gets executed just once after all 10 test executions are completed.
  • After Test: Runs after each test.
  • Before Suite: Runs only once before executing all the tests.
  • Before Test: Runs before each test.
  • Test: Used to create a new test.

The MUnit Tools module has the following operations to validate whether the application is working as expected:

  • Assert equals: To check whether the payload value is equal to a specific value or not.
  • Assert expression: To check an evaluation based on a DataWeave expression.
  • Assert that: To check whether a payload value is equal to a specific value by using DataWeave functions. For example, MUnit matchers have a set of DataWeave functions to validate the conditions. The #[MunitTools::withMediaType('text/xml')] condition checks whether the expression’s media type is text/xml.
  • Clear stored data: To clear all stored data.
  • Dequeue: To remove the last event from the queue.
  • Fail: To fail the test with an assertion error.
  • Mock when: To mock the data when the flow calls the external system (see the sketch after this list).
  • Queue: To store the value in a queue during testing. The queue gets cleared after the test execution is complete.
  • Remove: To remove the value of the specific key that is stored using the Store operation.
  • Retrieve: To retrieve the value of the specific key that is stored using the Store operation.
  • Run custom: To run the custom assertion.
  • Sleep: To create a delay during a test.
  • Store: To store the value against a key during a test. It is used for temporary storage. After the test, it is cleared.
  • Store oauth token: To store the OAuth token during the test.
  • Verify call: To verify whether the processor is called or not.
  • Spy: To see what happens before and after the processors.
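
As referenced in the Mock when bullet, the following is a minimal sketch of how a mock is configured in a test suite’s XML. The processor, doc:name value, and returned payload are all hypothetical; in a real test, they would match the processor in your flow that calls the external system:

<munit-tools:mock-when doc:name="Mock when" processor="http:request">
  <munit-tools:with-attributes>
    <!-- Match the processor by its display name in the flow (hypothetical value) -->
    <munit-tools:with-attribute attributeName="doc:name" whereValue="#['Call partner API']"/>
  </munit-tools:with-attributes>
  <munit-tools:then-return>
    <!-- Return a canned payload instead of calling the real system -->
    <munit-tools:payload value="#[{status: 'ok'}]" mediaType="application/json"/>
  </munit-tools:then-return>
</munit-tools:mock-when>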

Using MUnit, we can perform the following actions:

  • Create test suites and test cases
  • Perform testing
  • Check the code coverage after testing

Let’s try to create a test suite to execute our test cases.

Creating a test suite

As we have learned, a test suite is a collection of test cases. In order to create a test suite, we need to create a Mule application first. Instead of creating a Mule application from scratch, we will use the HelloWorld Mule application, which we developed in Chapter 3, Exploring Anypoint Studio. The application has HTTP Listener with the /hello endpoint, Logger to log the Welcome to Hello world application message, and Transform Message to output { message: "Hello World" }. If you did not create this Mule application earlier, then you can use the .jar file to import the application into Anypoint Studio using the File menu option. Select Import and then click Packaged mule application (.jar) to create the Mule application.
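
For reference, the HelloWorld flow’s configuration XML looks roughly like the following sketch (the flow name, config-ref, and doc:name values are assumptions; the names generated in your project may differ):

<flow name="helloworldFlow">
  <!-- Listens on http://localhost:8081/hello -->
  <http:listener config-ref="HTTP_Listener_config" path="/hello" doc:name="Listener"/>
  <logger level="INFO" message="Welcome to Hello world application" doc:name="Logger"/>
  <!-- Transform Message producing the JSON response -->
  <ee:transform doc:name="Transform Message">
    <ee:message>
      <ee:set-payload><![CDATA[%dw 2.0
output application/json
---
{ message: "Hello World" }]]></ee:set-payload>
    </ee:message>
  </ee:transform>
</flow>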

Now, let us create the test suite using MUnit:

  1. Open the Mule application, right-click on Flow, select MUnit, and choose Create blank test for this flow.
Figure 11.18 – MUnit – creating a blank test

This creates a new test suite in /src/test/munit/. The test suite contains three sections: Behavior, Execution, and Validation. The Behavior section sets the input for the test. The Execution section calls the actual flow in the Mule application. In the Validation section, we can write any kind of condition to validate the Mule application’s output. We can also see two different modules (MUnit and MUnit Tools) added in Mule Palette, as shown in Figure 11.19:

Figure 11.19 – MUnit – test suite

  2. Drag and drop the Assert equals operation from the MUnit Tools module into the Validation section. Provide the actual and expected values, as shown in Figure 11.20:
Figure 11.20 – MUnit – test suite Validation

When the test suite is run, the Execution section calls the HelloWorld flow, which produces a result of { message: "Hello World" }. The Validation section then compares the flow result with the expected value of Hello World, as specified in the Assert equals condition. If the expected value matches the actual result, the test passes.
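
For reference, the key part of the generated test suite XML looks roughly like this sketch (the test and flow names are assumptions based on our HelloWorld example):

<munit:test name="helloworld-test-suite-helloworldFlowTest" description="Test helloworldFlow">
  <munit:execution>
    <!-- Call the actual flow under test -->
    <flow-ref name="helloworldFlow" doc:name="Flow-ref to helloworldFlow"/>
  </munit:execution>
  <munit:validation>
    <!-- Compare the actual payload value with the expected value -->
    <munit-tools:assert-equals doc:name="Assert equals"
        actual="#[payload.message]"
        expected="#['Hello World']"/>
  </munit:validation>
</munit:test>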

  3. In the canvas, right-click and select Run MUnit test suite. The application runs and the result is compared to the assert condition provided. If the condition matches, it will show the success result, as shown in Figure 11.21:
Figure 11.21 – MUnit – test suite coverage report

If all the flow steps are executed in a Mule application, then it will show 100% coverage. As our test suite is successful, the Failures count is 0. This coverage report is also very useful when we have continuous integration (CI) and continuous deployment (CD) set up in MuleSoft. The CI/CD pipeline checks the coverage report percentage to decide whether to continue with deployment to the target environment. For example, if the coverage is around 40%, then it will not proceed with deployment to the target environment. If the coverage is more than 80%, then it will proceed with deployment to the target environment.

  4. The previous scenario was a successful test case. Now, let us try a failure scenario. In the Validation section, set the expected value as Hello World1 and try to run the test suite again. It fails as our expected value does not match the actual output of Hello World.
Figure 11.22 – MUnit – test suite MUnit Errors pane

Here, too, it will show 100% coverage as it has executed all the steps in the Mule application. As this test suite failed, it shows the Failures count as 1 and provides more details about the error in the MUnit Errors view.

In the preceding example, we saw a Mule application with just one flow. If we have two flows in the same Mule application, then the test suite file will have two different flows to handle the unit test cases, as shown in Figure 11.23:

Figure 11.23 – MUnit – test suite with multiple flows

With this, we have understood how to create and run a test suite in Anypoint Studio.

Next, let’s explore the MUnit Test Recorder in Anypoint Studio.

Exploring the MUnit Test Recorder

Writing each and every test suite manually is time consuming. Hence, we will use the MUnit Test Recorder to create test suites automatically and capture the input.

In a Mule application, we can use the Record test for this flow option in the flow to create the required test suites and also start the recording to capture the inputs. The input can be query parameters, URI parameters, or the request payload.

The MUnit Test Recorder automatically creates test suites only for successful scenarios. For any failure test scenarios or additional conditions, we need to create additional test cases manually. We can debug the test suites by adding breakpoints.

Let’s create a test suite using the Test Recorder.

Creating a test suite using the Test Recorder

In this section, we will learn how to create test suites in a Mule application using the MUnit Test Recorder:

  1. Open the HelloWorld Mule application, right-click on the flow, and select MUnit and then Record test for this flow.

It starts the Mule runtime in Anypoint Studio and deploys the application. Once the application is deployed, it shows the DEPLOYED status. A dialog box titled Test Recorder with the message Waiting for input data appears, as shown in Figure 11.24:

Figure 11.24 – MUnit – Test Recorder “Waiting for input data…”

In order to capture the input, let us send a request from Postman to our Mule application endpoint.

  2. Open the Postman application, set the URL endpoint as localhost:8081/hello, and click Send.
Figure 11.25 – Postman – sending a request to a Mule application

Once you send the request, it reaches the Mule application and executes all the steps there. We can see the Logger message in the console, which confirms the execution of the Mule application. Finally, it saves/records the input/output, as shown in the Test Recorder pop-up dialog box in Figure 11.26:

Figure 11.26 – Test Recorder – “Input recorded”

  3. Click Configure test to create the test suites.

Once we click Configure test, the New Recorded Test Welcome dialog box appears.

  4. On the Welcome screen, leave File name and Test name as is and click Next.
Figure 11.27 – New Recorded Test Welcome screen

In Figure 11.27, we have kept the default values for the test suite’s filename and test name, but we can change them to any names as per our naming convention. Upon clicking Next, the Configure Test dialog box appears.

Figure 11.28 – New Recorded Test Configure Test

It shows the name of the Mule application and its flow steps. The Set input and assert output section shows the input and output captured and recorded during an earlier run from Postman. The Flow Input tab captures the input received through attributes and the payload. As we have sent the request as a get method, we don’t have any input payloads in the Flow Input section.

Figure 11.29 – Flow Input

We can use these as test suite input. Now, let us see what information is available in the Flow Output tab.

The Flow Output tab captures the output (attributes and payload) received after the execution of the Mule application.

Figure 11.30 – Flow Output

We can use these output values for validation in our test suite.

  5. Click the Next button on the New Recorded Test | Configure Test screen (see Figure 11.28).
  6. Click Finish on the Test Summary screen (see Figure 11.31) to complete the test suite creation using the MUnit Test Recorder:
Figure 11.31 – New Recorded Test Test Summary

Now our test suite is created successfully, with the test suite .xml file and all the supporting .dwl files.

The test suite has three steps: Behavior, Execution, and Validation.

The Behavior step sets the inputs that we captured earlier using the MUnit Test Recorder. This input data will be taken from the set-event-attributes.dwl and set-event-payload.dwl files, as shown in Figure 11.32.

set-event-attributes.dwl and set-event-payload.dwl have all the attributes data and payload data, respectively. As it is a get method, the set-event-payload.dwl file will be empty. If it were a post method, then it would have had input data in JSON or XML or any other format captured in a .dwl file.

Figure 11.32 – MUnit test suite Behavior

The Execution step calls the flow and passes all the values received from the Behavior step.

The Validation step compares the Mule application output against the captured output that is available in the recorded file (assert_expression_payload.dwl).

Figure 11.33 – Recorded output payload

If validation fails, then the Validation step in the test suite will throw an error with the message The payload does not match.

Figure 11.34 – MUnit test suite Validation

With this, we have understood how to record input and output in a Mule application and use those captured inputs and outputs in our tests using the MUnit Test Recorder.

Summary

In this chapter, we had a look at the basics of testing and the various types of testing tools that are available.

We created a Mule application using a .jar file, created a test suite using the MUnit framework, and carried out tests using MUnit.

We also saw how the MUnit Test Recorder helps to create test cases automatically.

On completing this chapter, you have expanded your knowledge of how to test a Mule application using the MUnit testing framework, and I am sure that you are now confident enough to test your own Mule application.

In the next chapter, Chapter 12, MuleSoft Integration with Salesforce, we’ll explore one of the ways to integrate with Salesforce using the Salesforce connector.

Assignments

Try the following assignments to explore more on MUnit.

  • Download the example assets (Testing APIKit with MUnit and Unit Testing with MUnit – Tutorial) from Anypoint Exchange in Anypoint Platform (https://anypoint.mulesoft.com/exchange/), import them into Studio, and practice MUnit testing with these assets
  • Explore different MUnit Tools operations that are available in your test cases
  • Explore the limitations of MUnit

Questions

Take a moment to answer the following questions to serve as a recap of what you just learned in this chapter:

  1. What are the different tools available for load/performance testing?
  2. What is MUnit?
  3. What is MUnit Test Recorder?
  4. When would we use Mock When operations in MUnit Tools?
  5. What are Sleep operations in MUnit Tools?
  6. In which path are MUnit test cases stored?

Answers

  1. There are many tools available in the market for load/performance testing. For example: JMeter, SoapUI, LoadRunner, and Gatling.
  2. MUnit is a unit testing framework for testing Mule applications.
  3. The MUnit Test Recorder is a tool to record the inputs and outputs of a flow. It also creates test suites automatically, and those captured inputs and outputs can be used for your tests.
  4. Mock When is used to mock the data when the flow calls the external system. When unit testing, we cannot expect the external system to be available for our tests. Instead of calling the external system, we must mock the data to continue our testing. To achieve this, we need to use Mock When MUnit Tools operations.
  5. Sleep operations help to create a delay during a test. For example, during test execution, to wait 10 seconds before proceeding with the next step, we can use a Sleep operation. Here, 10 is the time value and seconds is the time unit, which are configurable in the Sleep operation.
  6. MUnit test cases are stored in /src/test/munit/ under the Mule application package.