Semi-Automated Testing

Semi-automated testing of an Alexa skill involves automatically sending a request to your skill, but still manually verifying the response. This style of testing is most useful when debugging, especially if the debugging session requires repeating the same flow of intents multiple times.

We’re going to look at two options for semi-automated testing of Alexa skills: the ASK CLI’s dialog command and TestFlow. First up, let’s see how to use the ASK CLI to test a skill.

Testing with the ASK CLI

We’ve already used the ASK CLI to bootstrap our Alexa skill project and to deploy it. But the ASK CLI has many more tricks up its sleeve, including the dialog subcommand, which opens a shell in which you can interact with the skill by typing utterances and reading the responses.

To use the ask dialog command in interactive mode, you’ll need to specify the --locale parameter to provide the locale you want to use when interacting with the skill. For example, here’s how to start a session with ask dialog in the U.S. English locale:

 $ ask dialog --locale en-US

Once the dialog session starts, you will be shown a User > prompt at which you can type in what you want to say to Alexa. Alexa will reply in lines that begin with Alexa >, followed by a new User > prompt for you to say something else. For instance, here’s a simple dialog session interacting with the Star Port 75 skill:

 User > open star port seventy five
 Alexa > Welcome to Star Port 75 Travel. How can I help you?
 User > hello
 Alexa > Have a stellar day!
 User > .quit

Here, we are launching the skill by typing “open” followed by the skill’s invocation name. Alexa replies with the launch request handler’s greeting. Then we say “hello” and Alexa replies by saying, “Have a stellar day!” Finally, we type in the special .quit command to end the dialog session.

Even though testing with ask dialog involves typing queries and reading Alexa’s responses, it’s a very similar experience to speaking to an actual Alexa-enabled device. But what if, in the course of developing a skill, you find yourself needing to run through the same dialog session many times? Maybe you try something, make a few changes to your skill’s fulfillment code, and then want to try the same dialog again?

As it turns out, ask dialog supports interaction record and playback. To record an ask dialog session, start it up as normal and type whatever queries you want to send to Alexa. When you’re done, use the special .record command. You’ll be prompted to supply a filename for the recorded interaction. Here’s the same ask dialog session as before, using the .record command to save the interaction for later replay:

 $ ask dialog --locale en-US
 User > open star port seventy five
 Alexa > Welcome to Star Port 75 Travel. How can I help you?
 User > hello
 Alexa > Have a stellar day!
 User > .record test/askdialog/launch-and-hello.json
 Created replay file at test/askdialog/launch-and-hello.json
 User > .quit

This will create a JSON file named launch-and-hello.json, which looks like this:

 {
   "skillId": "amzn1.ask.skill.8f0c022e-65ee-4c73-a889-c10ad5bc74de",
   "locale": "en-US",
   "type": "text",
   "userInput": [
     "open star port seventy five",
     "hello"
   ]
 }

The recorded script captures the skill’s ID and locale, as well as an array containing the text provided as input by the user. With this script in hand, you can now replay it as many times as you want by running ask dialog with the --replay parameter:

 $ ask dialog --replay test/askdialog/launch-and-hello.json
 [Info]: Replaying file test/askdialog/launch-and-hello.json.
 User > open star port seventy five
 Alexa > Welcome to Star Port 75 Travel. How can I help you?
 User > hello
 Alexa > Have a stellar day!
 User >

Because the script provides the locale, there’s no need to specify it when starting ask dialog. And, rather than have you type your query at the User > prompt, the --replay parameter will do that for you.
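
Since the replay file is plain JSON, you can also edit it by hand to tweak the conversation. For instance, here’s a sketch that slips a “help” utterance in before the hello; the skill ID is just the one captured in the earlier recording, so you’d substitute your own:

 {
   "skillId": "amzn1.ask.skill.8f0c022e-65ee-4c73-a889-c10ad5bc74de",
   "locale": "en-US",
   "type": "text",
   "userInput": [
     "open star port seventy five",
     "help",
     "hello"
   ]
 }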

One thing that the replay script does not contain is the .quit command to exit the dialog session. Therefore, once the replay script has completed, you can either continue poking at your skill with more queries or type .quit to exit the session. Optionally, you could have created the script with --append-quit to have the .quit command included in the script:

 User > .record test/askdialog/launch-and-hello.json --append-quit
 Created replay file at launch-and-hello.json (appended `.quit` to list of
 utterances).

You’ll find ask dialog useful for ad hoc testing sessions with your skill. And with the record/replay feature, you can have it automatically play through a predefined testing session. It’s still up to you, however, to visually inspect the responses coming back from Alexa to ensure they are what you expect from your skill.

One of the drawbacks of all of the testing techniques shown so far is that they require you to deploy your skill before you can test it. This is limiting in that you won’t be able to test your skills if you aren’t connected to the internet (such as if you are on a flight). And even if you are online, at the very least the feedback cycle is longer because you’ll have to wait on your skill to deploy before you can test it.

Thankfully, not all testing tools require that a skill be deployed. Let’s have a look at TestFlow, a semi-automated testing tool that supports repeatedly running conversations with a skill, but without requiring that the skill be deployed.

Testing with TestFlow

TestFlow is a testing tool that simply fires intents at your skill’s fulfillment code and reports the text from the response. Because it directly executes the request handlers in the fulfillment code rather than attempting to understand utterances, you don’t need to deploy your skill, or even be online, when running tests.

To get started with TestFlow, you’ll need to download it into the skill project. It can go in any directory, but in the interest of good project organization, let’s put it in the test/testflow directory (relative to the project’s root directory). You’ll need to first create the directory like this:

 $ mkdir -p test/testflow

TestFlow is fully contained in a single JavaScript file named testflow.js, which you can pull from the project’s GitHub repository.[13] Using curl, you can fetch testflow.js like this:

 $ curl https://raw.githubusercontent.com/robm26/testflow/master/testflow.js \
     -o test/testflow/testflow.js

After downloading the script, you’ll need to edit it, changing the SourceCodeFile constant near the top to correctly reference the location of the skill’s fulfillment code. Assuming that you have downloaded testflow.js into the test/testflow directory of the Star Port 75 skill project, the SourceCodeFile constant should be set like this:

 const SourceCodeFile = '../../lambda/index.js';

Next, you’ll need a directory where your TestFlow tests will live. In TestFlow, the test cases are known as dialogs, so create a dialogs directory alongside testflow.js in the test/testflow directory:

 $ mkdir test/testflow/dialogs

By default, TestFlow runs a dialog definition named default.txt from the dialogs directory. So, let’s create our first dialog definition, a default.txt file in the dialogs directory that asks TestFlow to launch the skill and say hello:

 LaunchRequest
 HelloWorldIntent

The dialog definition is nothing more than a list of intent names, with one intent name on each line. In the interest of simplicity, TestFlow doesn’t consider the skill’s interaction model and therefore it won’t execute the test based on utterances. Instead, it will invoke the canHandle() function on each of the skill’s request handlers to find a handler that can handle the given intent. When it finds one, it then invokes the handle() function on the matching handler to produce results.
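
As a point of reference, here’s a sketch of the kind of handler TestFlow exercises, following the standard ASK SDK canHandle()/handle() pattern. The HelloWorldIntentHandler shown here mirrors the one the hello-world skill template generates; your skill’s actual code may differ slightly:

 // A typical ASK SDK request handler in lambda/index.js. TestFlow calls
 // canHandle() on each handler to find a match, then handle() on the winner.
 const Alexa = require('ask-sdk-core');

 const HelloWorldIntentHandler = {
   canHandle(handlerInput) {
     // Match IntentRequests whose intent name is HelloWorldIntent
     return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest'
       && Alexa.getIntentName(handlerInput.requestEnvelope) === 'HelloWorldIntent';
   },
   handle(handlerInput) {
     // Build the spoken response whose text TestFlow prints to the console
     return handlerInput.responseBuilder
       .speak('Have a stellar day!')
       .getResponse();
   }
 };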

To see TestFlow in action, run the testflow.js file from the test/testflow directory using node:

 $ cd test/testflow
 $ node testflow.js
 Running testflow on ../../lambda/index.js using dialog sequence file
 ./dialogs/default.txt
 
 1 LaunchRequest
  Welcome to Star Port 75 Travel. How can I help you?
 ----------------------------------------------------------------
 
 2 HelloWorldIntent
  Have a stellar day!
 ================================================================

If you get an error that says something like “Cannot find module ask-sdk-core”, then you’ll need to install the Node dependencies. Temporarily switch over to the lambda directory and install the dependencies before running TestFlow again:

 $ cd ../../lambda
 $ npm install
 $ cd ../test/testflow
 $ node testflow.js

As TestFlow runs, it steps through each of the intents listed in default.txt and displays the response text from the matching handler.

TestFlow would be of very limited use if it could only run through the intents listed in default.txt. Typically, your skill will have several potential dialog flows, and you’ll want to define a dialog for each of them. While default.txt is the default dialog, you can create additional dialog definitions, naming them appropriately.

For example, suppose we wanted to test what would happen if the user asks for help after launching the skill. The following dialog definition, saved in the dialogs directory as with_help.txt, launches the skill, asks for help, and then triggers the HelloWorldIntent:

 LaunchRequest
 AMAZON.HelpIntent
 HelloWorldIntent

To run this dialog, you just need to specify the dialog definition filename as a parameter when running testflow.js:

 $ node testflow.js with_help.txt
 Running testflow on ../../lambda/index.js using dialog sequence file
 ./dialogs/with_help.txt
 
 1 LaunchRequest
  Welcome to Star Port 75 Travel. How can I help you?
 ----------------------------------------------------------------
 
 2 AMAZON.HelpIntent
  You can say hello to me! How can I help?
 ----------------------------------------------------------------
 
 3 HelloWorldIntent
  Have a stellar day!
 ================================================================

As with ask dialog, you’re still responsible for visually inspecting the results of the test and determining if the skill is behaving as you wish. But unlike ask dialog, you do not need to have deployed your skill before you can test it.

In either case, however, the need to visually inspect the response leaves room for human error, including accidentally overlooking bugs in the skill’s responses. Moreover, if the skill is built and deployed through a continuous integration/continuous deployment (CI/CD) pipeline, there’s no way to verify the results before deployment. Tools like ask dialog and TestFlow are semi-automated in that they can automatically run through a predefined dialog flow, but they can’t automatically verify the results.

Proper and complete testing of a skill requires that the skill be exercised through various dialog scenarios and that the responses be automatically verified. Let’s have a look at a couple of fully automated options for testing Alexa skills.
