This chapter covers
Application development is not an easy and carefree process. Even with careful implementation and checking, software bugs can slip through and put your company or your users at risk. In the past couple of decades, bug prevention and software testing have become imperative. As the old English proverb says, an ounce of prevention is worth a pound of cure.
Now with serverless, software testing seems to have gained a new layer of complexity. Having no server configuration, along with using AWS Lambda and API Gateway, can make testing your applications look scary. This chapter’s goal is to show how, with just a minor change to your application testing approach, you can test your serverless applications as easily as you did those that were server-hosted.
Recently, Aunt Maria has noticed that pizza ordering occasionally doesn’t work for some customers—and Pierre, her mobile developer, has been reporting “ghost” bugs, sometimes even when displaying the pizza list. Aunt Maria is worried that she’s losing some customers and has asked you to take a look. You can try to debug the Pizza API to find out where the issue is, but the bug might also be on the website or in the mobile app. Testing all the services manually each time an issue occurs is tedious, repetitive, and takes too long. The solution is to automate testing. Automated testing requires an initial investment to write the code that will test your application, but then you can rerun it to check your Pizza API whenever you alter its functionality, add a new feature, or encounter a new issue.
Automated testing is a big field, and there are many different automated test types, each taking a different approach: from testing small pieces (or units) of application code to complete application features and their behavior.
Taking your Pizza API as an example, the smaller unit tests will test just the execution of a single function within your pizza handlers, whereas the complete application tests (also known as end-to-end, or E2E, tests) will check the whole pizza listing and ordering flow from Aunt Maria’s website.
Many more types of automated tests exist. They are often grouped into three layers based on their approach, from bottom to top:
In addition to those three automated test layers, there is another layer of manual testing, usually performed by Quality Assurance teams.
These testing layers have different test costs. A visual representation of the layers, along with their corresponding costs, is often called a testing pyramid. Usually, the testing pyramid consists of only the three automated test layers, but to gain a better understanding of the value and cost of each test type, you can also add the manual testing layer to the picture. With all four layers combined, the test pyramid looks like figure 11.1. The costs in the figure are based on testing server-hosted applications.
The figure shows that higher-level UI tests are more expensive than unit tests, because they test the whole application’s behavior from the user’s perspective, including visual details such as properly set inputs, displayed values, and so on. Besides being more expensive, the UI tests are also significantly slower because of the quantity of checks and the sheer volume of code executed.
In server-hosted applications, running automated tests usually requires a separate testing server, because you don’t want to run the tests based on your production data. As a result, a big chunk of the server-hosted testing costs are infrastructure-related. That includes setting up a server with a setup identical to your production application, importing database data, developer time expended, and so on.
With serverless, the test-running costs are substantially reduced, mostly because there are no servers or server configuration. As a result, less developer time is invested. That reclaimed time can be used for more tests and more coverage. An updated testing pyramid for serverless applications, showing the difference in test costs, is presented in figure 11.2. We call this the serverless testing pyramid.
Developing serverless applications is great, because you don’t have to worry about infrastructure. But from a testing perspective, that benefit now becomes a problem: having no control over the infrastructure requires you to rethink how to test. At first glance, you might think that having no control over the infrastructure also means having no responsibility for AWS service outages or network disconnects. But that instinct would be wrong. Even though you don’t control the infrastructure, you’re still responsible when it fails. Your customers won’t know the difference between an AWS service malfunction and your application crashing. You’ll be held responsible, and at the very least, you’ll still need to check that your application handles those cases well.
The following step-by-step approach can help you remember those cases while writing tests. Some of you might be using it in a different form already:
A concern represents a single function or a single piece of code responsible for one operation. In our example case, that might be calculating the discount for a pizza order.
It’s like checking how a discounted price affects the amount you’re charging the customer’s credit card.
An end-to-end workflow represents one complete feature workflow available within your application. An example of this is loading your site, listing pizzas, choosing one, ordering it, and paying for it. Listing all the workflows will give you a better and more complete overview of the application.
This approach might seem logical, but the quantity of bugs in software applications nowadays tells us that something being logical doesn’t mean that it’s common practice.
A serverless Node.js application is still a Node.js application, and that means the tools you use for testing any other Node.js application will work for the Pizza API. This chapter uses Jasmine, which is one of the most popular Node.js testing frameworks, but you can use others, such as Mocha, Tape, or Jest.
Jasmine tests are called specs, so we’ll use the same name in the rest of this chapter. A spec is a JavaScript function that defines what a piece of your application should do. Specs are grouped in suites, which allow you to organize your specs. For example, if you’re testing a form, you can have a validation suite in which you group all specs related to form validation.
Jasmine uses a runner to run your specs. You can either run all of your specs or filter them and run a specific spec or a specific suite. Before writing tests, you need to prepare your project for unit testing. To do so, you’ll create a folder where you’ll save your specs, and then create a runner that will run them.
To follow Jasmine’s naming convention, create a specs folder in your Pizza API project. This folder will contain all the specs for your Pizza API, including unit and integration specs. It will also include a configuration for the Jasmine runner and some helpers, such as a helper for mocking HTTP requests. The application folder structure, with the specs you’ll create in this chapter, is shown in figure 11.3.
To configure your Jasmine runner, create a support folder in the specs folder of your Pizza API project. Inside that folder, create a jasmine.json file. This file represents runner configuration.
As shown in the following listing of this configuration, you need to define the location of your specs relative to the project root, and the pattern Jasmine will use to find spec files. In your case, it should be any file with a name that ends in “spec.js” or “Spec.js.”
Listing 11.1 Jasmine configuration
{
"spec_dir": "specs", ①
"spec_files": [
"**/*[sS]pec.js" ②
]
}
Next, define how Jasmine will run. You want to configure it to run with the configuration from the jasmine.json file and to give you an option to run only a specific spec or spec suite. And finally, you want it to run in verbose mode and to print the description of each spec as it runs.
To do so, create another file named jasmine-runner.js in the same folder, and open it with your favorite editor.
At the beginning of the file, require Jasmine from the jasmine NPM package and SpecReporter from the jasmine-spec-reporter NPM package. Then create an instance of Jasmine.
The next step is to loop through the arguments passed on the command line. You can ignore the first two arguments, because they are the paths to Node.js and to your current file. For each remaining argument, check whether it is full and, if so, show the Jasmine spec reporter instead of the default reporter. If the argument is a filter, run only the specs that contain the provided filter.

Finally, load the configuration by using Jasmine’s loadConfigFile method and launch the Jasmine runner with the provided filters.
Your jasmine-runner.js file should look like the following listing.
Listing 11.2 Jasmine runner
'use strict'
const SpecReporter = require('jasmine-spec-reporter').SpecReporter ①
const Jasmine = require('jasmine') ②
const jrunner = new Jasmine() ③
let filter ④
process.argv.slice(2).forEach(option => { ⑤
if (option === 'full') { ⑥
jrunner.configureDefaultReporter({ print() {} })
jasmine.getEnv().addReporter(new SpecReporter())
}
if (option.match('^filter=')) ⑦
filter = option.match('^filter=(.*)')[1]
})
jrunner.loadConfigFile() ⑧
jrunner.execute(undefined, filter) ⑨
At this point, you can run your specs using the node specs/support/jasmine-runner.js command. This will print the spec results to your terminal, with a green dot for each passing spec. To see the spec messages instead of green dots, you can run the node specs/support/jasmine-runner.js full command.
To simplify running your specs, you can add an NPM test script to your package.json file. This modification allows you to run your specs with the command npm test, or the even shorter npm t command. Add the following script to the package.json file:

"test": "node specs/support/jasmine-runner.js"
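If you’re unsure where the script goes, the package.json fragment below sketches its placement. The surrounding fields (name, version, dependency version ranges) are placeholders, not values from the book:

```json
{
  "name": "pizza-api",
  "version": "1.0.0",
  "scripts": {
    "test": "node specs/support/jasmine-runner.js"
  },
  "devDependencies": {
    "jasmine": "^3.0.0",
    "jasmine-spec-reporter": "^4.0.0"
  }
}
```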
To run the specs with full message output, run the command npm t -- full. The -- is required and must be followed by a space, because the options after it (in this case, full) are not NPM options; instead, they are passed to Jasmine directly.
The foundation of the testing pyramid is the unit layer, which consists of unit tests. The goal of unit testing is to isolate each part of the application and show that the individual parts are working as expected.
The unit size depends on the application; it can be as small as a function or as large as a class or an entire module. The smallest unit of code in the Pizza API that makes sense to isolate and test is a handler function. You can start with the getPizzas handler.
The only external connection in the getPizzas handler is the connection to the pizzas.json file. Even though this is a static file, it represents an external connection that shouldn’t be tested in a unit test. To prepare the handler for unit testing, you need to allow the handler function to receive a custom list of pizzas that overwrites the list from pizzas.json. By doing this, you ensure your unit test will still work if the pizzas.json file changes.
As shown in the following listing, you can do that by adding a pizzas parameter to your getPizzas handler, which defaults to the content of the pizzas.json file.
Listing 11.3 Updated getPizzas handler
'use strict'
const listOfPizzas = require('../data/pizzas.json') ①
function getPizzas(pizzaId, pizzas = listOfPizzas) { ②
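For reference, here is a minimal sketch of what the full handler might look like at this point. The pizza data is inlined instead of being required from pizzas.json so the sketch is self-contained, and the lookup logic is an assumption based on the specs that follow (note that it deliberately keeps the check the unit specs will later expose as buggy):

```javascript
'use strict'

// Inlined stand-in for require('../data/pizzas.json'), so the sketch is self-contained
const listOfPizzas = [
  { id: 1, name: 'Capricciosa' },
  { id: 2, name: 'Napoletana' }
]

function getPizzas(pizzaId, pizzas = listOfPizzas) {
  // Note: this check still has the falsy-zero bug the unit specs will catch later
  if (!pizzaId)
    return pizzas

  // Look up a single pizza by its ID
  const pizza = pizzas.find(p => p.id === pizzaId)
  if (pizza)
    return pizza

  throw new Error('The pizza you requested was not found')
}

module.exports = getPizzas
```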
Now that your handler is ready for testing, you can start writing specs. To do so, create a file named get-pizzas.spec.js in the specs/handlers folder.
In this file, require your handler and create an array of pizzas. It should contain at least two pizzas with names and IDs, and it can look like the following code snippet:
const pizzas = [{
id: 1,
name: 'Capricciosa'
}, {
id: 2,
name: 'Napoletana'
}]
Now describe your spec using Jasmine’s describe function. The description should be short and easy to understand; for example:

describe('Get pizzas handler', () => { ... })
Your describe block should contain multiple specs. For a simple function, such as the getPizzas handler, you should test the following:
Each spec is a separate block defined by invoking the it function. This function accepts two parameters: the spec description and a function that defines your spec. Remember: descriptions should be short but clear, so you can easily understand what is being tested.
Each spec contains one or more expectations that test the state of the code. Expectations are the actual verifications: you define what you expect a value to be, and Jasmine compares it with what the value actually is. Expectations are defined using expect statements.
In your first spec, you want to check that the handler returns a list of all pizzas when a pizza ID is not provided. To do so, you need to invoke the handler without the first argument, but you also need to provide a list of pizzas as a second argument. You can do this by passing undefined and a list of pizzas to the handler, respectively. The spec can look like the following snippet:
it('should return a list of all pizzas if called without pizza ID', () => {
expect(underTest(undefined, pizzas)).toEqual(pizzas)
})
To test the code with the existing pizza IDs, you should pass the IDs (1 and 2, respectively) and the list of pizzas, and expect the results to equal the first and second pizzas from your mocked array of pizzas. Your spec can look like this:
it('should return a single pizza if an existing ID is passed as the first parameter', () => {
expect(underTest(1, pizzas)).toEqual(pizzas[0])
expect(underTest(2, pizzas)).toEqual(pizzas[1])
})
For the last spec in the unit tests for the getPizzas handler, you can be as creative as you want in passing nonexistent IDs. For example, you should pass some edge cases, such as numbers smaller and larger than the existing IDs, but you should also test other values, such as strings or even other types.
The following example shows what your spec might look like:
it('should throw an error if nonexistent ID is passed', () => {
expect(() => underTest(0, pizzas)).toThrow('The pizza you requested was not found')
expect(() => underTest(3, pizzas)).toThrow('The pizza you requested was not found')
expect(() => underTest(1.5, pizzas)).toThrow('The pizza you requested was not found')
expect(() => underTest(42, pizzas)).toThrow('The pizza you requested was not found')
expect(() => underTest('A', pizzas)).toThrow('The pizza you requested was not found')
expect(() => underTest([], pizzas)).toThrow('The pizza you requested was not found')
})
Putting all this together, the following listing shows what your unit tests for the getPizzas handler should look like.
Listing 11.4 Unit tests for the getPizzas handler
'use strict'
const underTest = require('../../handlers/get-pizzas') ①
const pizzas = [{ ②
id: 1,
name: 'Capricciosa'
}, {
id: 2,
name: 'Napoletana'
}]
describe('Get pizzas handler', () => { ③
it('should return a list of all pizzas if called without pizza ID', () => { ④
expect(underTest(undefined, pizzas)).toEqual(pizzas) ⑤
})
it('should return a single pizza if an existing ID is passed as the first parameter', () => { ⑥
expect(underTest(1, pizzas)).toEqual(pizzas[0])
expect(underTest(2, pizzas)).toEqual(pizzas[1])
})
it('should throw an error if nonexistent ID is passed', () => { ⑦
expect(() => underTest(0, pizzas)).toThrow('The pizza you requested was not found')
expect(() => underTest(3, pizzas)).toThrow('The pizza you requested was not found')
expect(() => underTest(1.5, pizzas)).toThrow('The pizza you requested was not found')
expect(() => underTest(42, pizzas)).toThrow('The pizza you requested was not found')
expect(() => underTest('A', pizzas)).toThrow('The pizza you requested was not found')
expect(() => underTest([], pizzas)).toThrow('The pizza you requested was not found')
})
})
Navigate to your project folder and run the npm test command from the terminal. The output, shown in the following listing, indicates that one spec failed.
Listing 11.5 Response after running specs
> node specs/support/jasmine-runner.js
Started
..F
Failures:
1) Get pizzas handler should throw an error if nonexistent ID is passed
Message:
Expected function to throw an exception.
Stack:
Error: Expected function to throw an exception.
at UserContext.it (~/pizza-api/specs/handlers/get-pizzas.spec.js:26:40)
3 specs, 1 failure
Finished in 0.027 seconds
The failed spec prevents that bug from being deployed to the AWS Lambda function and creating an issue in production. It is important to test edge cases in your unit specs, because they can save you a lot of debugging from CloudWatch logs.
In this case, zero was passed as a pizza ID, and the getPizzas handler returned a list of all pizzas instead of throwing an error. Because zero is a falsy value in JavaScript, the following check in the getPizzas handler treats it as a missing ID:

if (!pizzaId)
return pizzas
To fix this problem, update the problematic part of the getPizzas handler to check for an undefined pizzaId. For example, you can replace it with the following code:

if (typeof pizzaId === 'undefined')
return pizzas
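You can see the difference between the two checks directly in Node.js:

```javascript
// 0 is falsy, so the original check treats pizza ID 0 as "no ID provided"
console.log(!0)                               // true
console.log(!undefined)                       // true

// The typeof check distinguishes a missing ID from a falsy one
console.log(typeof 0 === 'undefined')         // false
console.log(typeof undefined === 'undefined') // true
```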
After updating your getPizzas handler, rerun the specs using the npm test command. The specs should pass now, and the output should look like the following listing.
Listing 11.6 Response after running specs that are passing
> node specs/support/jasmine-runner.js
Started
...
3 specs, 0 failures
Finished in 0.027 seconds
Passed specs don’t guarantee that your code is bug-free, but if meaningful specs are included in your code coverage, the number of production issues will be significantly lower. But how do you unit test handlers that can’t be isolated easily—for example, handlers that have a direct connection to the DynamoDB table? That’s where mock functions prove effective.
In contrast to the getPizzas handler, most of the other handlers in the Pizza API interact with the database or send HTTP requests. To test those handlers in isolation, you’ll need to mock all external interaction.
Mocking, primarily used in unit testing, refers to creating objects that simulate the behavior of real objects. Using mocks—instead of the external objects and functions the handler being tested interacts with—allows you to isolate the behavior of the handler.
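To make the idea concrete, here is a tiny hand-rolled mock; the names are illustrative, not from the Pizza API. Libraries such as Jasmine spies do the same thing with many more features:

```javascript
// A minimal mock: a fake function that records how it was called
function makeMock(returnValue) {
  const mock = (...args) => {
    mock.calls.push(args) // record every invocation for later assertions
    return returnValue
  }
  mock.calls = []
  return mock
}

// Replace a real dependency (e.g., saving an order to a database) with the mock
const saveOrder = makeMock(Promise.resolve({ ok: true }))
saveOrder({ pizza: 1, address: '221b Baker Street' })

console.log(saveOrder.calls.length)      // 1
console.log(saveOrder.calls[0][0].pizza) // 1
```

The code under test runs unchanged, but every call to the dependency is recorded instead of touching the real database.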
Let’s try testing a more complex handler, such as createOrder. Two things require mocking in the createOrder handler: the HTTP request to the Some Like It Hot Delivery API, and DocumentClient, because you want to isolate the test of the createOrder handler from any dependency. If you tested the fully integrated handler, you would need to set up a test database just to test handler validation.

Mocking is important because unit specs are much faster to run than integration and end-to-end specs. Running your full spec suite takes a few seconds, instead of minutes or even hours in more complex systems. Unit specs are also much cheaper, because you don’t need to pay for infrastructure when you want to check whether your handler logic works as expected.
After mocking HTTP requests and DynamoDB communication, the handler you’re testing should work as described in figure 11.4.
To create a unit spec for the createOrder handler, create a file named create-order.spec.js in the specs/handlers folder of your Pizza API project. Then require this handler at the top of your spec file and add a Jasmine describe block, because you want to group your specs so you can easily read your Jasmine runner output.
At this point, your spec file should look like this:
const underTest = require('../../handlers/create-order')
describe('Create order handler', () => {
// Place for your specs
})
Now let’s mock the HTTP request. There are many ways to do that in Node.js. For example, you can use a full-featured module for mocking, such as Sinon (http://sinonjs.org) or Nock (https://github.com/node-nock/nock), or even write your own.
In the spirit of Node.js and serverless development, we always recommend using small and focused modules, and fake-http-request is exactly that: a small Node.js module that mocks HTTP and HTTPS requests. You can install the module from NPM and save it as a development dependency by running the npm install fake-http-request --save-dev command.
In your new unit test, require the https module at the top of the file too, because the fake-http-request module uses it for tracking mocked HTTP requests.
To use the fake-http-request module, you’ll need Jasmine’s beforeEach and afterEach functions, which allow you to do something before and after each spec is executed. To install and uninstall the module, add the following snippet inside the describe block of your spec file:
beforeEach(() => fakeHttpRequest.install('https'))
afterEach(() => fakeHttpRequest.uninstall('https'))
Now that HTTPS requests are mocked, you need to mock the AWS DocumentClient. To do so, you’ll need to require aws-sdk and then replace the DocumentClient class with a Jasmine spy. Remember to bind the Promise.resolve function; otherwise it’ll have a different this value and fail.
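The binding matters because Promise.resolve uses its this value as the constructor for the new promise; detached from Promise, it throws. A quick sketch you can run in Node.js:

```javascript
'use strict'

const detached = Promise.resolve // loses its `this` (Promise)
let threw = false
try {
  detached() // throws a TypeError, because `this` is no longer Promise
} catch (e) {
  threw = e instanceof TypeError
}
console.log(threw) // true

const bound = Promise.resolve.bind(Promise) // keeps Promise as `this`
bound().then(() => console.log('resolved')) // works as expected
```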
Because the AWS SDK uses a prototype to create the DocumentClient class, you can replace DocumentClient with your Jasmine spy by adding the following to the beforeEach block:
AWS.DynamoDB.DocumentClient.prototype = docClientMock
At this point, your create-order.spec.js file should look like the following listing.
Listing 11.7 Base of the createOrder handler unit test
'use strict'
const underTest = require('../../handlers/create-order') ①
const https = require('https') ②
const fakeHttpRequest = require('fake-http-request')
const AWS = require('aws-sdk') ③
let docClientMock ④
describe('Create order handler', () => {
beforeEach(() => {
fakeHttpRequest.install('https') ⑤
docClientMock = jasmine.createSpyObj('docClient', { ⑥
put: { promise: Promise.resolve.bind(Promise) }, ⑦
configure() { }
})
AWS.DynamoDB.DocumentClient.prototype = docClientMock ⑧
})
afterEach(() => fakeHttpRequest.uninstall('https')) ⑨
// Place for your specs
})
Because the createOrder handler is more complex than the getPizzas handler, it requires more specs. To start with the most important parts, you should test the following: that the handler sends a POST request to the Some Like It Hot Delivery API; that it invokes DocumentClient to save an order only if the Some Like It Hot Delivery API request was successful; and that it resolves successfully only if both the Some Like It Hot Delivery API and DocumentClient requests were successful.

But you can add even more specs and test additional edge cases. To keep the page count of this chapter reasonable, we show you only the most important ones; you can see a complete create-order.spec.js with all the important specs in the source code that goes with the book.
For the first spec, add an it block that checks whether a POST request is sent to the Some Like It Hot Delivery API. Try to use a short and easily understood description; for example, “should send POST request to Some Like It Hot Delivery API.”
In this spec, you want to invoke the createOrder handler with valid data, and then use the https module to see if the request is sent with the expected body and headers.
fake-http-request adds a pipe method to https.request, so you can use that method to check whether the HTTPS request is sent with the expected values. For example, you can check that the number of sent requests is 1, because only one API request should be sent to the Some Like It Hot Delivery API. You can also check that the options passed to https.request were correct, including the method, path, body, and headers.
Your spec should look like listing 11.8.
Listing 11.8 Mocking a POST request
it('should send POST request to Some Like It Hot Delivery API', (done) => { ①
underTest({ ②
body: {
pizza: 1,
address: '221b Baker Street'
}
})
https.request.pipe((callOptions) => { ③
expect(https.request.calls.length).toBe(1) ④
expect(callOptions).toEqual(jasmine.objectContaining({ ⑤
protocol: 'https:',
slashes: true,
host: 'some-like-it-hot-api.effortlessserverless.com',
path: '/delivery',
method: 'POST',
headers: {
Authorization: 'aunt-marias-pizzeria-1234567890',
'Content-type': 'application/json'
},
body: JSON.stringify({ ⑥
pickupTime: '15.34pm',
pickupAddress: 'Aunt Maria Pizzeria',
deliveryAddress: '221b Baker Street',
webhookUrl: 'https://g8fhlgccof.execute-api.eu-central-1.amazonaws.com/latest/delivery'
})
}))
done() ⑦
})
})
The next important test checks whether DocumentClient is invoked after a successful HTTP request. To test that, you need to simulate a successful response from the Some Like It Hot Delivery API by adding an https.request.calls[0].respond(200, 'Ok', '{}') line in the https.request.pipe method.
Because the createOrder handler returns a promise, you can use .then to check whether the DocumentClient mock was invoked.
Remember to add done() after the expect statement, and also to invoke done.fail() if the promise was rejected; otherwise, your specs will run until Jasmine times out and then fail.
The spec for testing the DocumentClient invocation should look like the following listing.
Listing 11.9 Testing DocumentClient invocation

it('should call the DynamoDB DocumentClient.put if Some Like It Hot Delivery API request was successful', (done) => {
underTest({ ①
body: { pizza: 1, address: '221b Baker Street' }
})
.then(() => {
expect(docClientMock.put).toHaveBeenCalled() ②
done() ③
})
.catch(done.fail) ④
https.request.pipe((callOptions) => https.request.calls[0].respond(200, 'Ok', '{}')) ⑤
})
Another similar spec should show that the DocumentClient mock is never invoked if the HTTP request fails. The differences between this spec and the previous one are that you expect docClientMock.put not to have been called, and that the fake-http-request library should return an error (with an HTTP status code greater than or equal to 400).

The spec for making sure that the DocumentClient mock is not invoked after a failed HTTP request might look like the following listing.
Listing 11.10 Testing that the DocumentClient mock is not invoked if the HTTP request fails

it('should not call the DynamoDB DocumentClient.put if Some Like It Hot Delivery API request was not successful', (done) => {
underTest({
body: { pizza: 1, address: '221b Baker Street' }
})
.then(done.fail) ①
.catch(() => {
expect(docClientMock.put).not.toHaveBeenCalled() ②
done()
})
https.request.pipe((callOptions) => https.request.calls[0].respond(500, 'Server Error', '{}')) ③
})
If you run the npm test or npm t command, the specs should run successfully.
Integration tests are another test type; they are even more important for serverless functions that are larger than a few lines of code. Unlike unit tests, integration tests use real integrations with other parts of your system. But they still can and should mock some third-party libraries that you don’t control. For example, you don’t want your automated tests to interact with a payment processor.
As shown in figure 11.5, integration tests of the createOrder handler would still mock the Some Like It Hot Delivery API, because sending HTTP requests to a third-party API can affect real-world users. They would, however, use a real integration with a DynamoDB table prepared for testing.
The flow of the integration tests for the createOrder handler is as follows:
Because there are just a few handlers, you can have both unit and integration tests in the same folder. Just make sure you name them in such a way that you can understand the difference easily. For example, integration tests for the createOrder handler can be in the create-order-integration.spec.js file.
As shown in the next listing, preparation for integration testing of the createOrder handler involves a few steps.
The first step is to require all the modules you need: the handler that you’re testing, aws-sdk (because you need the DynamoDB class), and the https and fake-http-request modules.
Then you need to generate a name for your test DynamoDB table. You could use the same name each time, but a generated name has a better chance of being unique. You also need to increase Jasmine’s timeout to at least one minute, because creating and deleting a DynamoDB table can take a while, and the default five-second timeout is not long enough.
Next, you need to create a DynamoDB table before all tests, using Jasmine’s beforeAll function. Keep in mind that the creation of a DynamoDB table is asynchronous, so you’ll need to use the done callback to tell Jasmine when the operation is finished. If you don’t do that, spec execution will start before the table is ready.
You can use the createTable method of the DynamoDB class for this. The test table needs to have the same key definitions as your pizza-orders table, which means it needs orderId as a hash key.
Because the createTable promise resolves before the DynamoDB table is ready, you can use the waitFor method of the AWS SDK’s DynamoDB class to be sure that the table exists before invoking the Jasmine done callback.
For deleting the table in Jasmine’s afterAll function, the flow is similar: delete the table using the deleteTable method of the DynamoDB class, then use the waitFor method to be sure that the table is deleted. Finally, invoke the done callback.
Mocking HTTP requests to the Some Like It Hot Delivery API is similar to the mocking you did for unit tests. The only difference is that you want to mock only HTTP requests to this particular API; you want to allow other HTTP requests, because the DynamoDB class uses them to interact with the AWS infrastructure. To do so, you can pass an object that contains the request type (in your case, https) and a regex matcher for the domain name to the fakeHttpRequest.install function.
At this point, your create-order-integration.spec.js file should look like the next listing.
Listing 11.11 Preparation for the integration test for the createOrder handler
'use strict'
const underTest = require('../../handlers/create-order') ①
const AWS = require('aws-sdk') ②
const dynamoDb = new AWS.DynamoDB({ ③
apiVersion: '2012-08-10',
region: 'eu-central-1'
})
const https = require('https') ④
const fakeHttpRequest = require('fake-http-request')
const tableName = `pizzaOrderTest${new Date().getTime()}` ⑤
jasmine.DEFAULT_TIMEOUT_INTERVAL = 60000 ⑥
describe('Create order (integration)', () => {
beforeAll((done) => {
const params = {
AttributeDefinitions: [{
AttributeName: 'orderId',
AttributeType: 'S'
}],
KeySchema: [{
AttributeName: 'orderId',
KeyType: 'HASH'
}],
ProvisionedThroughput: {
ReadCapacityUnits: 1,
WriteCapacityUnits: 1
},
TableName: tableName
}
dynamoDb.createTable(params).promise() ⑦
.then(() => dynamoDb.waitFor('tableExists', { ⑧
TableName: tableName
}).promise())
.then(done)
.catch(done.fail)
})
afterAll(done => {
dynamoDb.deleteTable({ ⑨
TableName: tableName
}).promise()
.then(() => dynamoDb.waitFor('tableNotExists', { ⑩
TableName: tableName
}).promise())
.then(done)
.catch(done.fail)
})
beforeEach(() => fakeHttpRequest.install({ ⑪
type: 'https',
matcher: /some-like-it-hot-api/
}))
afterEach(() => fakeHttpRequest.uninstall('https'))
// Place for your specs
})
Now that you have the integration test set up, you need to update the createOrder handler so that it can receive the DynamoDB table name dynamically. You can do that by passing the table name as a second argument, or you can set the table name as an environment variable.
The easiest way is to pass the table name as the second argument. To do so, update your createOrder handler to accept the DynamoDB table name, but remember to set pizza-orders as the default value so you don’t break the existing code. Your createOrder handler function arguments should look like this:
function createOrder(request, tableName = 'pizza-orders') {
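A quick sketch of why the default parameter keeps existing callers working. The handler body is reduced here to the table-name plumbing only (the real createOrder also calls the delivery API and DocumentClient, which this sketch omits); if you preferred the environment-variable approach instead, you would read process.env inside the handler:

```javascript
// Reduced sketch: only the table-name plumbing, not the real handler logic
function createOrder(request, tableName = 'pizza-orders') {
  // The real handler would pass tableName to DocumentClient.put as TableName
  return { savedTo: tableName }
}

// Existing callers keep writing to the production table
console.log(createOrder({ pizza: 1 }).savedTo) // 'pizza-orders'

// Integration specs can redirect writes to the test table
console.log(createOrder({ pizza: 1 }, 'pizzaOrderTest123').savedTo) // 'pizzaOrderTest123'
```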
The last but most difficult step is to add the integration specs. The specs should check all the critical parts of the integration of your handler with any other part of the system or the infrastructure.
To keep the length of this chapter reasonable, we show you just the most important spec, which tests whether the data is written to the database as expected. You can see the full create-order-integration.spec.js file with more specs in the source code.
As shown in listing 11.12, to test if the order was saved to the database after the Some Like It Hot Delivery API response, you need to do the following:
1. Invoke the createOrder handler with valid data and the test DynamoDB table name.
2. Simulate a successful response from the Some Like It Hot Delivery API that contains a deliveryId.
3. After the createOrder handler promise is resolved, use the DynamoDB class instance to query the database for the item with the ID received from the Some Like It Hot Delivery API.
4. Check that the result of dynamoDb.getItem is correct.
Listing 11.12 Testing if the order is saved to the DynamoDB table
it('should save the order in the DynamoDB table ' +
  'if the Some Like It Hot Delivery API request was successful', (done) => {
  underTest({
    body: { pizza: 1, address: '221b Baker Street' }
  }, tableName) ①
    .then(() => {
      const params = {
        Key: {
          orderId: {
            S: 'order-id-from-delivery-api'
          }
        },
        TableName: tableName
      }
      return dynamoDb.getItem(params).promise() ②
        .then(result => {
          expect(result.Item.orderId.S).toBe('order-id-from-delivery-api') ③
          expect(result.Item.address.S).toBe('221b Baker Street')
          expect(result.Item.pizza.N).toBe('1')
          done()
        })
    })
    .catch(done.fail) ④

  https.request.pipe((callOptions) => https.request.calls[0].respond(200, 'Ok', JSON.stringify({ ⑤
    deliveryId: 'order-id-from-delivery-api'
  })))
})
If you run the npm test
command again, you’ll notice that it takes more time, but it should show all tests as passed, including your integration test.
You can check the AWS Web Console to make sure that the DynamoDB table is deleted successfully. Even after you’ve added a few more integration tests, your monthly AWS bill for the application built in this book should still be just a few cents.
You’ve seen that unit and integration tests in serverless apps are similar to the same tests in non-serverless Node.js applications. As expected, the major impact is on the speed of setting up the infrastructure copy for the tests (setup is fast because there’s no server configuration to do) and the price of the infrastructure (you don’t have to pay for it when it’s not in use).
There are many other types of automated tests, and serverless architecture affects some of them. For example, running load and stress tests makes little sense against a serverless architecture that auto-scales within documented limits. That holds unless your application isn't fully serverless, or you don't trust your serverless provider, which is a problem beyond the scope of this book.
Another type of automated test that can be affected by serverless architecture is GUI testing. It might not sound intuitive, but although serverless is mostly focused on infrastructure, it can speed up GUI tests through parallel execution and headless browsers, such as headless Chrome and PhantomJS. Headless browsers are regular web browsers without a graphical user interface; instead, you run them from the command line. The ability to run automated GUI tests on Google Chrome running on AWS Lambda has already resulted in many new tools that simplify GUI testing. Even more importantly, those tools speed up the tests by an order of magnitude and drastically cut the price. One of the tools that allows you to run GUI tests on AWS Lambda is Appraise, a visual-approval testing tool that uses headless Chrome to take a screenshot and then compares the screenshot with the expected output. To learn more about Appraise, visit http://appraise.qa.
So far, you’ve learned the basics of testing serverless applications, but that doesn’t mean you’ve covered all the potential edge cases. Let’s take the example of our pizza order-saving handler.
Listing 11.13 The current pizza order-saving handler
function createOrder(request, tableName = 'pizza-orders') {
  const docClient = new AWS.DynamoDB.DocumentClient({ ①
    region: process.env.AWS_DEFAULT_REGION
  })

  const userData = request && request.context && request.context.authorizer &&
    request.context.authorizer.claims
  let userAddress = request && request.body && request.body.address
  if (!userAddress) {
    if (!userData)
      throw new Error('User data is missing')
    userAddress = JSON.parse(userData.address).formatted ②
  }

  if (!request || !request.body || !request.body.pizza || !userAddress)
    throw new Error('To order pizza please provide pizza type and address ' +
      'where pizza should be delivered') ③

  return rp.post('https://some-like-it-hot-api.effortlessserverless.com/delivery', { ④
    headers: {
      Authorization: 'aunt-marias-pizzeria-1234567890',
      'Content-type': 'application/json'
    },
    body: JSON.stringify({
      pickupTime: '15.34pm',
      pickupAddress: 'Aunt Maria Pizzeria',
      deliveryAddress: userAddress,
      webhookUrl: 'https://g8fhlgccof.execute-api.eu-central-1.amazonaws.com/latest/delivery'
    })
  })
    .then(rawResponse => JSON.parse(rawResponse.body))
    .then(response => {
      return docClient.put({ ⑤
        TableName: tableName,
        Item: {
          cognitoUsername: userData && userData['cognito:username'],
          orderId: response.deliveryId,
          pizza: request.body.pizza,
          address: userAddress,
          orderStatus: 'pending'
        }
      }).promise()
    })
    .then(res => {
      console.log('Order is saved!', res)
      return res ⑥
    })
    .catch(saveError => {
      console.log(`Oops, order is not saved :(`, saveError)
      throw saveError
    })
}
This service looks fine: it's your handler for storing pizza orders, properly structured in a separate file, straightforward, and simple. It also doesn't do multiple things at the same time. But there is a catch. As you've seen, it's almost impossible to test automatically without invoking AWS DynamoDB. Even though the integration tests you wrote seem to be a good solution, they don't cover all the edge cases. For example, what if one part of the AWS DynamoDB service changes abruptly and you don't manage to follow up? Or what if the DynamoDB service crashes? These conditions may occur only rarely, but taking the risks out of the equation is important. Beyond these, there are many more risks to consider, and they can be categorized into four types. You may be wondering what sorts of risks those types cover, so here's a short list for the example of storing a single pizza order to DynamoDB:
You could test each of these as you did for the integration tests, but setting up and configuring the service each time you want to test for one of these risks isn’t optimal. Imagine if testing automobiles was done that way. That would mean that every time you wanted to test a single screw or even a mirror in a car, you would have to assemble and then disassemble the whole car. Therefore, to make it more testable, the obvious solution is to break up your serverless function into several smaller ones.
If you’re struggling with figuring out how to do this, or if it’s your first time breaking apart any kind of service into smaller functions, you might not know where to start. Luckily, other people wanted to do it correctly and make their code more testable, too, which resulted in an architectural practice called Hexagonal Architecture, or the ports and adapters pattern.
Although the term “Hexagonal Architecture” sounds complex, it’s a simple design pattern where your service code pieces don’t talk directly to external resources. Instead, your service core talks to a layer of boundary interfaces. External services connect to those interfaces and adapt the concepts they need to those important for the application. For example, your createOrder
handler in a Hexagonal Architecture wouldn’t directly receive an API request; it would receive an OrderRequest
object in an application-specific format that contains the pizza
and deliveryAddress
objects describing the ordered pizza and delivery address. An adapter would be responsible for converting between the API request format and the createOrder
format. You can see a visual representation of this handler with the proposed Hexagonal Architecture in figure 11.6.
This architecture also means that your createOrder function won't call DynamoDB directly. Instead, it will talk to boundary interfaces that are specific to your needs. For example, you could define an OrderRepository object, which could be any object with a createOrder function for storing orders. You would then define a separate DynamoOrderRepository object that implements that particular interface and talks to DynamoDB. You would do the same with the Some Like It Hot Delivery API.
This architecture allows you to test the integration of API requests and DynamoDB with your code without worrying how your service interacts with DynamoDB or the delivery service. Even if DynamoDB completely changes its API or you change from DynamoDB to some other AWS database service, your handler’s core will not change, just the DynamoOrderRepository
object will. This improves testing of successful responses and internal error handling, and keeps your application code safe and consistent. Also, it shows what you need to mock in your integration tests.
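For example, a unit spec could replace DynamoOrderRepository with a trivial in-memory fake. The following is a sketch under the assumption that the repository interface is just a createOrder function returning a promise:

```javascript
// Hypothetical in-memory fake that satisfies the same interface as
// DynamoOrderRepository; the handler under test can't tell the difference.
function InMemoryOrderRepository() {
  const self = this
  self.orders = []
  self.createOrder = function (orderData) {
    self.orders.push(orderData)            // store the order in memory
    return Promise.resolve(orderData)      // mimic the async DynamoDB call
  }
}
```

A spec would pass `new InMemoryOrderRepository()` to the handler and then assert on its `orders` array, with no DynamoDB table, AWS credentials, or network involved.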
To implement this architecture, you'll need to break your createOrder handler into several functions. We'll show only the part that interacts with DynamoDB. You'll need to pass the orderRepository as an additional parameter into your createOrder function. Instead of communicating directly with the AWS DynamoDB DocumentClient, you'll call createOrder on the orderRepository. The next listing shows the applied orderRepository changes.
Listing 11.14 Updating pizza order saving handler with orderRepository
function createOrder(request, orderRepository) { ①
  // We have removed the code for initializing AWS DynamoDB,
  // because that has moved inside the orderRepository ②
  const userData = request && request.context && request.context.authorizer &&
    request.context.authorizer.claims
  let userAddress = request && request.body && request.body.address
  if (!userAddress) {
    if (!userData)
      throw new Error('User data is missing')
    userAddress = JSON.parse(userData.address).formatted
  }

  // the previous code remains the same
    .then(rawResponse => JSON.parse(rawResponse.body))
    .then(response => orderRepository.createOrder({ ③
      cognitoUsername: userData && userData['cognito:username'],
      orderId: response.deliveryId,
      pizza: request.body.pizza,
      address: userAddress,
      orderStatus: 'pending'
    }))
  // the rest of the code remains the same
}
This updated listing demonstrates how the createOrder
handler has changed. Now, if you wanted to refactor or change your database service, you wouldn’t need to edit your createOrder
handler at all. Also, it’s much easier to mock orderRepository
compared with DynamoDB’s DocumentClient
. The only thing remaining is to set up the orderRepository
. You can create it as a separate module, because you might want to use it in the other handlers as well. The next listing demonstrates the orderRepository
setup.
Listing 11.15 Wiring and configuring the orderRepository
const AWS = require('aws-sdk') ①

module.exports = function orderRepository() { ②
  const self = this
  const tableName = 'pizza-orders' ③
  const docClient = new AWS.DynamoDB.DocumentClient({
    region: process.env.AWS_DEFAULT_REGION
  }) ④

  self.createOrder = function (orderData) { ⑤
    return docClient.put({ ⑥
      TableName: tableName,
      Item: {
        cognitoUsername: orderData.cognitoUsername,
        orderId: orderData.orderId,
        pizza: orderData.pizza,
        address: orderData.address,
        orderStatus: orderData.orderStatus
      }
    }).promise()
  }
}
Setting up boundary interfaces, such as the orderRepository
from this listing, helps you to separate the logic of interacting with the specifics of AWS DynamoDB from the logic of saving pizza orders. Now, you can try to implement the other boundary interfaces (for DeliveryRequest and the API request) on your own.
Writing testable serverless functions makes your code simpler, easier to read, and easier to debug, and also removes the potential risks from your services. Thinking about testing first, before developing, can help you to avoid potential problems while at the same time providing high-quality serverless applications.
We hope that this chapter has provided you with enough knowledge and resources to at least make a start on testing your serverless functions. Now it’s time for your exercises!
Automated tests are an important part of any application. Serverless applications are no different. We’ve prepared a small exercise for you, but you shouldn’t stop there. Instead, you should go further and write more tests, until testing your serverless applications becomes part of your normal workflow.
In Node.js applications, people often test API routes. You can do the same with Claudia API Builder. So, your next exercise is to test whether Claudia API Builder set up all the routes correctly. Here are a few tips on how to do that:
- Use the apiConfig method of Claudia API Builder to get the API configuration with the routes array.
If you need an additional challenge, you can update your Pizza API to follow the Hexagonal Architecture guidelines, and then you can test the rest of your Pizza API service. This additional challenge isn't discussed in the next section, but you can take a look at the source code to see our solution.
To test the API routes, create a file called api.spec.js in the specs folder of your Pizza API project. Note that this file should not be in the handlers subfolder, because you’re not testing handlers.
In this file, require the main api.js file and use Jasmine’s describe
function to add a description, which can be simple—for example, “API
” or “API routes
.”
Then define the array of objects that contain paths and methods for those paths. Define paths without the leading slash (/
), because Claudia API Builder stores them that way.
The next step is to loop through the array of routes and invoke Jasmine’s it
function for each. You can test whether the current route exists in the underTest.apiConfig().routes
array and if its methods are the same methods you defined in the routes
array.
For the full api.spec.js file, see the next listing.
Listing 11.16 Testing API routes
'use strict'

const underTest = require('../api') ①

describe('API', () => {
  [ ②
    {
      path: '',
      methods: ['GET']
    }, {
      path: 'pizzas',
      methods: ['GET']
    }, {
      path: 'orders',
      methods: ['POST']
    }, {
      path: 'orders/{id}',
      methods: ['PUT', 'DELETE']
    }, {
      path: 'delivery',
      methods: ['POST']
    }, {
      path: 'upload-url',
      methods: ['GET']
    }
  ].forEach(route => {
    it(`should setup /${route.path} route`, () => { ③
      expect(Object.keys(underTest.apiConfig().routes[route.path])).toEqual(route.methods) ④
    })
  })
})
If you run the npm test command again, the tests should all pass. If you want to run only the tests for the API routes, you can run the npm t -- --filter="should setup" command.