Automated Testing with Bespoken’s BST

Bespoken is a company focused on testing and monitoring voice applications. Among other things, it offers a handful of open-source tools for testing Alexa skills. Most notable among those tools is the BST CLI, which enables YAML-based unit testing of Alexa skills.

BST comes in the form of a command line tool that you can use to set up an Alexa skill project for testing, run tests, and even measure test coverage to ensure that all corners of your skill’s code are exercised by tests. It also features a unique “Virtual Alexa” under the covers that does a pretty good job of understanding your skill’s interaction model, so you can test your skill without even deploying it.

Before you can use BST, you will need to install its command line tool, the BST CLI.

Setting Up BST

Much like the ASK CLI, the BST CLI is installed using npm. Here’s how to install it:

 $ npm install bespoken-tools --global

Since it is a command line tool and not a library, the --global flag tells npm to install it as a global package rather than as a dependency in the local project’s package.json. This also makes it handy for testing any skill you may write, not just the current skill.

The BST CLI is based on the Jest[14] JavaScript testing framework. As such, you get many of the same benefits you would get using Jest on a non-skill project. In particular, as we’ll see in a moment, the BST CLI offers test coverage reporting so that you can see which parts of your code aren’t covered by a test.

Before we can write and run tests with the BST CLI, we need to initialize BST for our project. The quickest way of initializing BST is with the bst init command. Type bst init at the command line from the root of the skill project:

 $ bst init
 
 BST: v2.6.0 Node: v17.6.0
 Remember, you can always contact us at https://gitter.im/bespoken/bst.
 
 Welcome to the Bespoken CLI.
 We'll set up all you need for you to start testing your voice apps.
 Please tell us:
 ? What type of tests are you creating - unit, end-to-end: (Use arrow keys)
 ❯ unit
  end-to-end

The bst init command will then ask you a series of questions about your project and the type of tests you want to create. The questions you will need to answer are:

  • The type of tests—This can be either “unit” or “end-to-end”. BST supports both locally run unit tests as well as end-to-end testing of a deployed skill. For now, we’re going to focus on writing unit tests, so select “unit”.

  • The name of the project—Our project is named “starport-75”, so that’s a good answer for this question. Even so, the only place that this name is used by BST is in a sample test that it creates. We’ll likely change that test later anyway, so the name you provide here doesn’t matter much.

  • The voice platform—BST supports writing tests for both Alexa and Google Assistant. Since we’re developing an Alexa skill, choose “Alexa”.

  • Handler name—This is the relative path from the testing.json file to the handler implementation. Enter “lambda/index.js” and press enter.

  • The locale—For now, we’re focusing on U.S. English, so the default answer will do fine. We’ll add support for multiple languages in Chapter 8, Localizing Responses.

The full bst init session, including the answers for our project, is shown here:

 $ bst init
 
 BST: v2.6.0 Node: v17.6.0
 Use bst speak to interact directly with Alexa.
 
 Welcome to the Bespoken CLI.
 We'll set up all you need for you to start testing your voice experiences.
 Please tell us:
 ? What type of tests are you creating - unit, end-to-end: unit
 ? Enter the name of your voice experience: starport-75
 ? Select the platform you are developing for Alexa
 ? Please provide the name of your handler file (or leave blank for index.js):
  lambda/index.js
 ? Enter the locale for your tests.
 If you are targeting multiple locales, please separate them by a comma: en-US
 
 That's it! We've created your voice app test files and you can find them under
 the "test" folder. To run them, simply type:
 bst test
 Learn more about testing for voice at https://read.bespoken.io

Great! Now our project has been configured for BST unit tests and there’s even a sample unit test already written for us. The BST configuration will be found in a file named testing.json at the root of the project and the tests will be in test/unit relative to the project root. But before we can run the test or write new tests, we’ll need to tweak the configuration a little to specify the location of the interaction model.
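To put that in perspective, here’s roughly where the relevant files live in the project after initialization (other files omitted; your layout may vary slightly):

```
.
├── testing.json                 <-- BST configuration
├── lambda/
│   └── index.js                 <-- the skill's fulfillment code
├── skill-package/
│   └── interactionModels/
│       └── custom/
│           └── en-US.json       <-- the interaction model
└── test/
    └── unit/
        └── index.test.yml       <-- sample test created by bst init
```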

By default, the bst command assumes that the interaction model is located at models/en-US.json (relative to the project root). But that default path refers to the location of the interaction model in projects created by an older version of the ASK CLI. In projects created by the current ASK CLI, the interaction model is in skill-package/interactionModels/custom/en-US.json. Therefore, open the testing.json file and change it to look like this:

 {
   "handler": "lambda/index.js",
   "interactionModel": "skill-package/interactionModels/custom/en-US.json",
   "locales": "en-US"
 }

As you can see, the handler property is set to the relative path of the fulfillment code’s main JavaScript file. The locales property is set to “en-US” so that, by default, all tests execute as if the device were set to U.S. English.

If you’d like, you may also relocate testing.json so that it’s in the test/unit directory alongside the BST tests. If you do that, you’ll need to change the paths in testing.json to be relative to the new location:

 {
   "handler": "../../lambda/index.js",
   "interactionModel": "../../skill-package/interactionModels/custom/en-US.json",
   "locales": "en-US"
 }

Now that BST is set up, let’s see what it can do. Although we haven’t written any tests yet, bst init gave us a simple test to start with. Let’s run it and see how it fares.

Running BST Tests

To run BST tests, type bst test at the command line (from the project root):

 $ bst test --jest.collectCoverage=false
 
 BST: v2.6.0 Node: v17.6.0
 Did you know? You can use the same YAML syntax for both your end-to-end
 and unit tests. Find out more at https://read.bespoken.io.
 
  FAIL test/unit/index.test.yml
  My first unit test suite (en-US)
  Launch and ask for help
  ✕ LaunchRequest (508ms)
  ✕ AMAZON.HelpIntent (1ms)
 
  ● My first unit test suite (en-US) › Launch and ask for help ›
  LaunchRequest
 
  Expected value at [prompt] to ==
 
  Welcome to starport-75
  Received:
  <speak>Welcome to Star Port 75 Travel. How can I help you?</speak>
  Timestamp:
  2020-01-03T08:29:44.444
 
  ● My first unit test suite (en-US) › Launch and ask for help ›
  AMAZON.HelpIntent
 
  Expected value at [prompt] to ==
  What can I help you with?
  Received:
  <speak>You can say hello to me! How can I help?</speak>
 
  11 | - LaunchRequest : "Welcome to starport-75"
  12 | - AMAZON.HelpIntent :
  > 13 | - prompt : "What can I help you with?"
  14 |
 
  at test/unit/index.test.yml:13:0
  Timestamp:
  2020-01-03T08:29:44.446
 
 Test Suites: 1 failed, 1 total
 Tests: 1 failed, 1 total
 Snapshots: 0 total
 Time: 1.867s
 Ran all test suites.

As shown here, bst test was run with --jest.collectCoverage=false as a parameter. BST collects test coverage metrics, which can be useful in determining how much of your skill’s fulfillment code is covered by tests and which parts are not. At this point, however, we’re just getting started and the test coverage report would just be extra noise. Setting --jest.collectCoverage=false disables test coverage reporting. Once we’ve written a few more tests, we’ll leave that parameter off to check how much of our code is covered by tests.
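Typing that flag on every run can get tedious. Since BST passes the contents of a jest property in testing.json through to the underlying Jest configuration (the same mechanism we’ll use later in this chapter for coveragePathIgnorePatterns), you could, as a sketch, disable coverage by default in the configuration instead:

```json
{
  "handler": "lambda/index.js",
  "interactionModel": "skill-package/interactionModels/custom/en-US.json",
  "locales": "en-US",
  "jest": {
    "collectCoverage": false
  }
}
```

If you go this route, remember to remove the collectCoverage setting once you’re ready to start measuring coverage again.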

What’s most notable in the results from bst test is that the test in test/unit/index.test.yml has failed. More specifically, it failed on two assertions:

  • When launching the skill, it was expecting the response to be “Welcome to starport-75”, but instead it received “Welcome to Star Port 75 Travel. How can I help you?”

  • When asking for help, it expected to receive “What can I help you with?” but instead it got “You can say hello to me! How can I help?”

If you take a look at test/unit/index.test.yml, it’s easy to see how those expectations are set:

 ---
 configuration:
   description: My first unit test suite
 ---
 - test : Launch and ask for help
 - LaunchRequest : "Welcome to starport-75"
 - AMAZON.HelpIntent :
   - prompt : "What can I help you with?"

This test specification has two parts, the configuration and a test case, separated by a triple dash (---). The configuration is rather simple and only specifies a description of the test. The test case, however, deserves a bit more explanation.

The first line specifies the name of the test case. In this case, the name is “Launch and ask for help”. This is what will be displayed in the test results.

The next line instructs BST to send a launch request to the skill and to expect “Welcome to starport-75” as the response. Obviously, this is not how our skill responds to a launch request, so we should change it to expect “Welcome to Star Port 75 Travel. How can I help you?”

The next few lines instruct BST to send an intent request for the built-in AMAZON.HelpIntent. The line that starts with prompt asserts that the response speech from the intent should be “What can I help you with?” We should change this to “You can say hello to me! How can I help?” so that it matches what the skill’s implementation actually returns.

After making those tweaks to the test specification, here’s what it should look like:

 ---
 configuration:
   description: My first unit test suite
 ---
 - test : Launch and ask for help
 - LaunchRequest : "Welcome to Star Port 75 Travel. How can I help you?"
 - AMAZON.HelpIntent :
   - prompt : "You can say hello to me! How can I help?"

Let’s try running the test again to see if it fares better after these changes:

 $ bst test --jest.collectCoverage=false
 
 BST: v2.6.0 Node: v17.6.0
 Use bst launch to mimic someone opening your skill.
 
  PASS test/unit/index.test.yml
  My first unit test suite (en-US)
  Launch and ask for help
  ✓ LaunchRequest (401ms)
  ✓ AMAZON.HelpIntent (1ms)
 
 Test Suites: 1 passed, 1 total
 Tests: 1 passed, 1 total
 Snapshots: 0 total
 Time: 1.799s
 Ran all test suites.

Fantastic! The test now passes! With that initial test settled, let’s write a few more tests to cover some of the other request handlers in our skill.

Writing Tests for BST

At this point the main request handler is HelloWorldIntentHandler, so we should write a test for it next. Following the example set in index.test.yml, we’ll create a new test specification in a file named hello.test.yml:

 ---
 configuration:
   description: Hello world intent tests

 ---
 - test: Hello World intent
 - HelloWorldIntent:
   - prompt: Have a stellar day!

Just like index.test.yml, the test specification in hello.test.yml is divided into two sections. The first section holds configuration specific to this test specification; in this case, just a description. The other section is an actual test case. The test submits an intent request for the HelloWorldIntent, asserting that the response’s output speech is “Have a stellar day!”
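A test case isn’t limited to a single prompt assertion, either. Bespoken’s YAML syntax supports additional assertion properties such as reprompt and cardTitle (see the Bespoken documentation for the full list). The values in the following sketch are hypothetical, since our HelloWorldIntentHandler doesn’t currently set a reprompt or a card, so a test like this would only pass against a handler that does:

```yaml
---
- test: Hello World intent (hypothetical assertions)
- HelloWorldIntent:
  - prompt: Have a stellar day!
  - reprompt: Is there anything else I can do for you?
  - cardTitle: Star Port 75 Travel
```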

The test specifies the intent to invoke by the intent’s name (HelloWorldIntent, for example). But we can also build a test around an utterance. For example, rather than submitting a request directly to the intent named HelloWorldIntent, you can test what happens when the user says “Hello”:

 ---
 - test: Hello World Utterance
 - "Hello":
   - prompt: Have a stellar day!

To see our new test specification in action, run the bst test command:

 $ bst test --jest.collectCoverage=false
 
 BST: v2.6.0 Node: v17.6.0
 bst test lets you have a complete set of unit tests using a simple YAML
 format. Find out more at https://read.bespoken.io.
 
  PASS test/unit/index.test.yml
  My first unit test suite (en-US)
  Launch and ask for help
  ✓ LaunchRequest (76ms)
  ✓ AMAZON.HelpIntent (1ms)
 
  PASS test/unit/hello.test.yml
  Hello world intent tests (en-US)
  Hello World intent
  ✓ HelloWorldIntent (4ms)
  Hello World Utterance
  ✓ Hello (8ms)
 
 Test Suites: 2 passed, 2 total
 Tests: 3 passed, 3 total
 Snapshots: 0 total
 Time: 0.879s, estimated 2s
 Ran all test suites.

As you can see, bst test ran both test specifications, including both test cases in hello.test.yml. But sometimes you might want to focus attention on a single test specification. You can do this by passing the test specification filename as a parameter to bst test. For example, to only run the tests in hello.test.yml:

 $ bst test hello.test.yml

In addition to HelloWorldIntent, our fulfillment includes several handlers for Amazon’s built-in intents—AMAZON.StopIntent, AMAZON.CancelIntent, and AMAZON.FallbackIntent—as well as a request handler for SessionEndedRequest. Even though all of these handlers are fairly simple, we should still write some tests for them. The following test specification in standard-handlers.test.yml should do fine:

 ---
 configuration:
   description: Tests for standard request handlers

 ---
 - test: Cancel request
 - AMAZON.CancelIntent:
   - prompt: Goodbye!

 ---
 - test: Stop request
 - AMAZON.StopIntent:
   - prompt: Goodbye!

 ---
 - test: Fallback Intent
 - AMAZON.FallbackIntent:
   - prompt: Sorry, I don't know about that. Please try again.

 ---
 - test: Session ended request
 - SessionEndedRequest:

This test specification, unlike the others we’ve seen so far, has four test cases, each separated by a triple dash. The three intent handler tests simply assert the response’s output speech. As for the test for SessionEndedRequest, it doesn’t assert anything. That’s because the request handler for SessionEndedRequest doesn’t do much. But having this test will at least ensure that the handler is executed with no errors.

Now that we have tests covering both our HelloWorldIntent as well as tests to cover Amazon’s built-in intents and requests, let’s run them all and see how they fare:

 $ bst test --jest.collectCoverage=false
 
 BST: v2.6.0 Node: v17.6.0
 Use bst utter to interact with your skill, and we will handle your
 intents for you.
 
  PASS test/unit/standard-handlers.test.yml
  Tests for standard request handlers (en-US)
  Cancel request
  ✓ AMAZON.CancelIntent (291ms)
  Stop request
  ✓ AMAZON.StopIntent (4ms)
  Fallback Intent
  ✓ AMAZON.FallbackIntent (1ms)
  Session ended request
  ✓ SessionEndedRequest (1ms)
 
  PASS test/unit/hello.test.yml
  Hello world intent tests (en-US)
  Hello World intent
  ✓ HelloWorldIntent (5ms)
  Hello World Utterance
  ✓ Hello (2ms)
 
  PASS test/unit/index.test.yml
  My first unit test suite (en-US)
  Launch and ask for help
  ✓ LaunchRequest (6ms)
  ✓ AMAZON.HelpIntent (2ms)
 
 Test Suites: 3 passed, 3 total
 Tests: 7 passed, 7 total
 Snapshots: 0 total
 Time: 1.203s
 Ran all test suites.

Awesome! All tests are passing! It would seem that there’s nothing left to test. But how can we know for sure that we’re testing everything?

Measuring Test Coverage

If we run the tests again without the --jest.collectCoverage=false parameter, we’ll get a test coverage report that will give us an idea of how much of our skill’s code remains untested:

 $ bst test
 
 BST: v2.6.0 Node: v17.6.0
 
 ...
 
 ---------------|---------|----------|---------|---------|-------------------|
 File | % Stmts | % Branch | % Funcs | % Lines | Uncovered Line #s |
 ---------------|---------|----------|---------|---------|-------------------|
 All files | 7.66 | 7.14 | 31.25 | 7.78 | |
  lambda | 84 | 100 | 83.33 | 84 | |
  index.js | 84 | 100 | 83.33 | 84 | 14,16,44,46 |
  test/astf | 0 | 100 | 0 | 0 | |
  hello-test.js| 0 | 100 | 0 | 0 |... 12,21,22,32,33 |
  test/testflow | 0 | 0 | 0 | 0 | |
  testflow.js | 0 | 0 | 0 | 0 |... 97,601,607,615 |
 ---------------|---------|----------|---------|---------|-------------------|
 Test Suites: 2 passed, 2 total
 Tests: 6 passed, 6 total
 Snapshots: 0 total
 Time: 2.004s
 Ran all test suites.

Oh my! According to this test coverage report, only about 7% of our skill’s code is covered by tests. Even with the tests we’ve written, we’ve still got a long way to go.

But wait a minute. Several of the JavaScript files listed in the test coverage report are for things that shouldn’t be measured, such as the tests we wrote earlier in this chapter with TestFlow and the Alexa Skill Test Framework. And all of those have 0% coverage, which is definitely dragging down our overall coverage. We should exclude those files from test coverage.

To exclude files from test coverage, we need to set the jest.coveragePathIgnorePatterns property in our BST configuration. Edit the testing.json file in test/unit so that it looks like this:

 {
   "handler": "../../lambda/index.js",
   "interactionModel":
     "../../skill-package/interactionModels/custom/en-US.json",
   "locales": "en-US",
   "jest": {
     "coveragePathIgnorePatterns": [
       "<rootDir>/lambda/util.js",
       "<rootDir>/lambda/local-debugger.js",
       "<rootDir>/lambda/test.*",
       "<rootDir>/test/.*"
     ]
   }
 }

Here, we’re setting the jest.coveragePathIgnorePatterns property so that it ignores any JavaScript files under the test and lambda/test directories, which takes care of the TestFlow and Alexa Skill Test Framework files. It also ignores the util.js and local-debugger.js files. Since we’re not even using those right now, there’s no reason to include them in the test coverage report.

Now, if we run it again, the test coverage table should be much less daunting:

 $ bst test
 
 BST: v2.6.0 Node: v17.6.0
 
 ...
 
 ----------|----------|----------|----------|----------|-------------------|
 File | % Stmts | % Branch | % Funcs | % Lines | Uncovered Line #s |
 ----------|----------|----------|----------|----------|-------------------|
 All files | 77.14 | 100 | 75 | 77.14 | |
  index.js | 77.14 | 100 | 75 | 77.14 |... 36,139,140,142 |
 ----------|----------|----------|----------|----------|-------------------|
 Test Suites: 3 passed, 3 total
 Tests: 7 passed, 7 total
 Snapshots: 0 total
 Time: 1.874s, estimated 2s
 Ran all test suites.

That’s a lot better, but we’re still not quite at 100%. There seems to be some untested code remaining in index.js. If you take a quick look at index.js you’ll find that we still have two handlers that we’ve not written tests for: ErrorHandler and IntentReflectorHandler.

Unfortunately, ErrorHandler is difficult to test properly. In order for the error handler to be triggered, we’d need to simulate an error condition in one of the other request handlers. All of our other request handlers are too simple to cause an error, so the only way to test ErrorHandler at this point would be to fabricate a failure condition.

As for IntentReflectorHandler, it only exists for purposes of debugging the interaction model and we’ll eventually remove it before publishing our skill. So it doesn’t make much sense to write a test for it.

Fortunately, we can exclude those two handlers from test coverage by adding a simple comment just before each of them:

 /* istanbul ignore next */
 const IntentReflectorHandler = {
   ...
 };
 /* istanbul ignore next */
 const ErrorHandler = {
   ...
 };

Because BST is based on Jest, and Jest uses a library named Istanbul under the covers to track code coverage, we can use Istanbul’s directives in our code. Placing a comment containing the phrase istanbul ignore next just before a code block in JavaScript excludes that block from test coverage. In this case, a comment is placed just before each of IntentReflectorHandler and ErrorHandler, so both handlers are excluded from test coverage in their entirety.

Now that those two handlers are excluded from test coverage, let’s try running the tests one more time to see how our coverage has improved:

 $ bst test
 
 BST: v2.6.0 Node: v17.6.0
 
 ...
 
 ----------|----------|----------|----------|----------|-------------------|
 File | % Stmts | % Branch | % Funcs | % Lines | Uncovered Line #s |
 ----------|----------|----------|----------|----------|-------------------|
 All files | 100 | 100 | 100 | 100 | |
  index.js | 100 | 100 | 100 | 100 | |
 ----------|----------|----------|----------|----------|-------------------|
 Test Suites: 3 passed, 3 total
 Tests: 7 passed, 7 total
 Snapshots: 0 total
 Time: 1.877s, estimated 2s
 Ran all test suites.

Achievement unlocked! We now have 100% test coverage. As we continue to develop our skill, maintaining a high level of test coverage will let us know that it’s still meeting our expectations.

The text-based test results and coverage metrics are useful feedback as we run our tests. But you may want a more polished test report that can be published on a project dashboard or sent in a status email. Don’t worry: BST has you covered. Let’s take a look at BST’s HTML-based test reports.

Viewing Test Results in a Web Browser

As a side effect of running tests through bst test, you also get a few nice HTML-based test reports. One of those reports, in test_output/report/index.inline.html, shows the test results, as in the following screenshot:

images/test/jest-stare.png

At the top of the test results report, you’ll see a link named “Coverage”. Clicking it opens the test coverage report in test_output/coverage/lcov-report/index.html, shown in this screenshot:

images/test/lcov-home.png

This page shows a summary of test coverage for each JavaScript file in the project, but if you click on the filename, you’ll be shown the source for that file, with uncovered code highlighted in red:

images/test/lcov-code-1.png

Detailed test results and test coverage reports are among the best features of testing with the BST CLI. And, just like with the Alexa Skill Test Framework, you do not need to deploy your skill before you test it with bst test.

Automated testing, with either the Alexa Skill Test Framework or the BST CLI, is a great way to test a skill frequently, getting quick feedback that lets you know if you’ve made any changes that break how your skill is expected to work. And, as we’ll see next, with a little bit of scripting, you can even leverage those automated tests to prevent a broken skill from being deployed.
