Creating Your First Alexa Skill

Throughout this book, we’re going to build an Alexa skill for Star Port 75 Travel, a fictional travel agency specializing in interstellar travel destinations. Using the skill we create, adventurous travelers will be able to plan an out-of-this-world vacation. As the book progresses, we’ll add features to capture the user’s travel plans, book travel with external systems, support multiple languages and locales, remind the user of their upcoming trips, and much more. We’ll start simple in this chapter, though, creating a voice application that simply welcomes the user to Star Port 75 Travel.

To get started, we’re going to use the ASK CLI to bootstrap the project. Among the many commands that the ASK CLI offers is the new command, which creates a new skill project, including the project’s directory structure, the skill manifest, the interaction model, and some elementary code on which to build custom behavior.

Bootstrapping any Alexa skill project starts with typing ask new at the command line and answering a few questions:

 $ ask new
 Please follow the wizard to start your Alexa skill project ->
 ? Choose the programming language you will use to code your skill:
   (Use arrow keys)
 ❯ NodeJS
   Python
   Java

The first question asks which language you want to develop with. The ASK SDK is available for NodeJS, Python, and Java. You’re welcome to choose any of them for your skills, but there are many more resources and tools available for NodeJS, so that’s the option we’ll use for the Star Port 75 skill.

After selecting the language, you’ll be asked to select the deployment option for hosting your skill:

 $ ask new
 Please follow the wizard to start your Alexa skill project ->
 ? Choose the programming language you will use to code your skill: NodeJS
 ? Choose a method to host your skill's backend resources:
 ❯ Alexa-hosted skills
   Host your skill code by Alexa (free).
   AWS with CloudFormation
   Host your skill code with AWS services and provision with AWS
   CloudFormation (requires AWS account).
   AWS Lambda
   Host your skill code on AWS Lambda (requires AWS account).
   ──────────────
   self-hosted and manage your own hosting

Here you have four options: Alexa-hosted, AWS with CloudFormation, AWS Lambda, and self-hosting. We’re going to choose “Alexa-hosted skills”, a free option that is suitable for many Alexa skills, including the one we’re going to build. If you choose Alexa-hosted skills, Alexa will provision AWS Lambda endpoints in all three Alexa-service regions, an Amazon S3 bucket for media resources, an Amazon DynamoDB database that your skill can use to persist data, and an AWS CodeCommit Git repository for your Alexa project. Because all of that is set up for you, Alexa-hosted skills are by far the easiest option for getting started.

Note that if you were to select AWS Lambda or AWS with CloudFormation hosting here instead of Alexa-hosted skills, you would need to have linked your AWS account when you ran ask configure, and you would be asked a slightly different set of questions.

After selecting to deploy with Alexa-hosted skills, you’ll be asked to specify the default region.

 $ ask new
 Please follow the wizard to start your Alexa skill project ->
 ? Choose the programming language you will use to code your skill: NodeJS
 ? Choose a method to host your skill's backend resources: Alexa-hosted skills
   Host your skill code by Alexa (free).
 ? Choose the default region for your skill:  (Use arrow keys)
 ❯ us-east-1
   us-west-2
   eu-west-1

Although Alexa-hosted skills will be deployed in all three regions supported for Alexa skills, you still must designate one as the default region. Pick the region closest to your users. Here, we’ll choose “us-east-1”.

After selecting the default hosting region, you’ll be asked for the name of the skill:

 $ ask new
 Please follow the wizard to start your Alexa skill project ->
 ? Choose the programming language you will use to code your skill:  NodeJS
 ? Choose a method to host your skill's backend resources:  Alexa-hosted skills
   Host your skill code by Alexa (free).
 ? Choose the default region for your skill:  us-east-1
 ? Please type in your skill name:  starport-75

By default, the skill will be named “Hello World Skill”. But we’ll want to change that to “starport-75” for our skill.

Finally, you’ll be asked to specify a directory name for the project to be created in. This can be different from the skill’s name, but for our purposes, “starport-75” is a fine choice:

 $ ask new
 Please follow the wizard to start your Alexa skill project ->
 ? Choose the programming language you will use to code your skill:  NodeJS
 ? Choose a method to host your skill's backend resources:  Alexa-hosted skills
   Host your skill code by Alexa (free).
 ? Choose the default region for your skill:  us-east-1
 ? Please type in your skill name:  starport-75
 ? Please type in your folder name for the skill project (alphanumeric):
   starport-75

Once you’ve given the project directory a name, the project will be created. You can change into that directory and have a look around.

Now that our project has been initialized, let’s take a quick tour of the project structure.

Exploring the Project

At a high level, the directory structure of the newly created Alexa skill project will look like the following output from running the tree command:

 $ tree
 .
 ├── ask-resources.json
 ├── lambda
 │   ├── index.js
 │   ├── local-debugger.js
 │   ├── package.json
 │   └── util.js
 └── skill-package
     ├── interactionModels
     │   └── custom
     │       └── en-US.json
     └── skill.json

Of all the files in the project, three are most significant:

  • skill-package/interactionModels/custom/en-US.json: This file defines the skill’s interaction model for U.S. English. Later, in Chapter 8, Localizing Responses, we’ll expand the interaction model to other languages and locales.

  • lambda/index.js: This is the source code for the fulfillment implementation. It contains JavaScript code for handling Alexa requests.

  • skill-package/skill.json: This is the skill’s deployment manifest, which describes some essential information about the skill and is used when deploying the skill.

As we continue to develop our skill, we’ll touch these three files the most. We’ll edit en-US.json and index.js in this chapter, and we’ll tweak skill.json several times throughout the book to configure additional capabilities in our skill.

There are also a handful of files and directories that are important and helpful in building and deploying a skill:

  • lambda/package.json—This is a typical Node package definition specifying the modules that are required to support the skill. This includes, among other things, the ASK SDK (see the sketch following this list). As you continue to evolve your skill, you might find yourself using the npm command line tool to add more modules to the project.

  • lambda/local-debugger.js—This script enables you to test and debug your Alexa skill with locally running fulfillment code. It is deprecated in favor of a much simpler way to run skills locally (which is covered in Appendix 1, Running and Debugging Skill Code Locally). You may delete it from the project or just ignore it.

  • lambda/util.js—This is a utility script that simplifies resolving URLs to images, sounds, and other resources stored in Amazon’s S3 service. We won’t need this script initially, so we’ll ignore it for now.

  • ask-resources.json—This file describes where the skill’s resources are placed in the project.
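
For reference, the generated lambda/package.json looks something like the following sketch. The exact package name, description, and module versions vary with the version of the template you receive, so treat these values as placeholders:

 {
   "name": "hello-world",
   "version": "1.1.0",
   "description": "alexa utility for quickly building skills",
   "main": "index.js",
   "license": "Apache License",
   "dependencies": {
     "ask-sdk-core": "^2.6.0",
     "ask-sdk-model": "^1.18.0",
     "aws-sdk": "^2.326.0"
   }
 }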

Even though these files are important, it’s rare that you’ll need to change them.

Now that we’ve taken the nickel tour of the “Hello World” template project, let’s dive even deeper into the code, starting with a look at the interaction model defined in skill-package/interactionModels/custom/en-US.json.

Describing the Interaction Model

Each skill defines its own interaction model, essentially defining how a user is expected to interact with the skill. While the interaction model can be created from scratch, the ASK CLI will give us a simple interaction model for U.S. English in skill-package/interactionModels/custom/en-US.json to get started with. For any brand new project based on the “Hello World” template, the interaction model will look like this:

 { "interactionModel": { "languageModel": { "invocationName": "change me", "intents": [ { "name": "AMAZON.CancelIntent", "samples": [] }, { "name": "AMAZON.HelpIntent", "samples": [] }, { "name": "AMAZON.StopIntent", "samples": [] }, { "name": "HelloWorldIntent", "slots": [], "samples": [ "hello", "how are you", "say hi world", "say hi", "hi", "say hello world", "say hello" ] }, { "name": "AMAZON.NavigateHomeIntent", "samples": [] } ], "types": [] } }, "version": "1" }

As mentioned before, the interaction model describes two main things: the skill’s invocation name and the mappings of utterances to the intents supported by the skill. As shown here, the invocation name is “change me”. That means that if we were to deploy this skill, a user would be able to open it by saying, “open change me.” But since that is a horrible invocation name, we’ll definitely want to change it to give it a name more befitting its purpose. Since we’re developing a skill for Star Port 75 Travel, we should change it to “star port seventy five” like this:

 {
   "interactionModel": {
     "languageModel": {
       "invocationName": "star port seventy five",
       ...
     }
   }
 }

With the invocation name set this way, we’ll be able to launch the skill by saying, “Alexa, open star port seventy five.”

Note that the invocation name is all lowercase and the words “seventy five” are spelled out rather than written as digits. These are, in fact, requirements for invocation names, enforced by the platform. There are several rules around invocation names,[8] but in short, an invocation name must be two or more words, must start with a letter, and can contain only lowercase letters, spaces, apostrophes, and periods. If we were to set the invocation name to “Star Port 75 Travel”, it would be rejected when we get around to deploying it later because the digits “75” are not allowed.

In addition to the invocation name, the interaction model also defines the intents supported by the skill, along with sample utterances for each of those intents. The interaction model in skill-package/interactionModels/custom/en-US.json provided by the “Hello World” template defines five intents, four of which are Amazon built-in intents and one of which is a domain-specific intent defined by the skill itself.

HelloWorldIntent is the one intent that is specific to this skill. In a little while, we’ll see code that handles requests for this intent. But for now, notice that it has several sample utterances. Ultimately, we’re developing a travel planning skill and not a “hello world” skill. But for now, just to get started, we’ll leave this intent in place.

The remaining intents are Amazon built-in intents that cover situations common to most skills. These intents don’t list any sample utterances in their samples property because the platform already associates them with utterances appropriate to their purpose. For example, the intent named AMAZON.HelpIntent is pre-associated with the “help” utterance. You don’t need to explicitly list “help” or any other sample utterances, but you may list additional utterances if you want the help intent to respond to a non-default utterance. For example, the following snippet from skill-package/interactionModels/custom/en-US.json will have Alexa respond with help information if the user says “huh”:

 {
   "name": "AMAZON.HelpIntent",
   "samples": [
     "huh"
   ]
 },

The AMAZON.CancelIntent and AMAZON.StopIntent intents match when the user says “cancel” and “stop,” and as with any intent, you’re welcome to add utterances to their sample lists. The two are often synonymous with each other; in our skill, both are used to exit the skill when the user says “cancel” or “stop.” In some skills, however, the words “cancel” and “stop” have different meanings. For example, “cancel” may carry the semantics of canceling an order without exiting the skill. That’s why they’re expressed as two separate intents, so that you can associate different behavior with each, as the sketch below illustrates.
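
To make that concrete, here’s a minimal sketch of how the combined handler could be split into two handlers with distinct behavior. The response text here is purely hypothetical; the point is that each intent gets its own canHandle() check and its own handle() logic:

 const CancelIntentHandler = {
   canHandle(handlerInput) {
     return Alexa.getRequestType(handlerInput.requestEnvelope)
         === 'IntentRequest'
       && Alexa.getIntentName(handlerInput.requestEnvelope)
         === 'AMAZON.CancelIntent';
   },
   handle(handlerInput) {
     // Hypothetical: back out of the current action, but stay in the skill.
     const speakOutput = 'Okay, never mind. What else can I do for you?';
     return handlerInput.responseBuilder
       .speak(speakOutput)
       .reprompt(speakOutput)
       .getResponse();
   }
 };

 const StopIntentHandler = {
   canHandle(handlerInput) {
     return Alexa.getRequestType(handlerInput.requestEnvelope)
         === 'IntentRequest'
       && Alexa.getIntentName(handlerInput.requestEnvelope)
         === 'AMAZON.StopIntent';
   },
   handle(handlerInput) {
     // Exit the skill entirely when the user says "stop".
     return handlerInput.responseBuilder
       .speak('Goodbye!')
       .withShouldEndSession(true)
       .getResponse();
   }
 };

You’d then register these two handlers with the skill builder in place of the combined CancelAndStopIntentHandler. For our skill, the combined handler is all we need.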

It’s important to realize that because this file is named en-US.json, it only describes the interaction model for English-speaking users in the United States locale. To support other languages and locales, we can create similar files alongside it in the skill-package/interactionModels/custom directory, named for the language and locale we want to support (de-DE.json for German in Germany, for example). For now, we’ll start with U.S. English, then internationalize our skill later in Chapter 8, Localizing Responses.

Throughout this book, we’ll modify the interaction model, adding new custom intents and seeing other kinds of built-in intents. But for now, let’s see how to write fulfillment code that handles requests from Alexa.

Handling Requests

While the interaction model describes the interaction between a user and your skill, the fulfillment endpoint defines how the skill should respond to requests. As mentioned earlier, the fulfillment endpoint is typically deployed as a function on AWS Lambda that handles speech requests. For skills created with the ASK CLI for the “Hello World” template, a starting point fulfillment implementation is given in the lambda/index.js file.

If you open that file in your favorite text editor, you’ll see a line at the top that looks like this:

 const Alexa = require('ask-sdk-core');

This line imports the ASK SDK module and assigns it to a constant named Alexa. The Alexa constant will be referenced throughout the fulfillment implementation code to access SDK functionality.

One place you’ll find the Alexa constant in use is at the end of the lambda/index.js file where it is used to create a skill builder object through which request handlers will be registered:

 exports.handler = Alexa.SkillBuilders.custom()
   .addRequestHandlers(
     LaunchRequestHandler,
     HelloWorldIntentHandler,
     HelpIntentHandler,
     CancelAndStopIntentHandler,
     SessionEndedRequestHandler,
     IntentReflectorHandler,
   )
   .addErrorHandlers(
     ErrorHandler,
   )
   .lambda();

As you can see, we’re currently using the skill builder to register six request handlers in the call to addRequestHandlers(). These include handlers for some of Amazon’s built-in intents, handlers for the skill lifecycle (LaunchRequestHandler and SessionEndedRequestHandler), and the handler for the intent named HelloWorldIntent.

You’ll notice that a special error handler is registered in a call to addErrorHandlers() to handle any errors that may occur while processing a request.

Finally, the last thing we do with the skill builder is call lambda(), which builds a Lambda function that acts as a front-controller for all requests, dispatching them to one of the registered request handlers.

The request handlers themselves are objects that expose two functions, canHandle() and handle(). For example, consider the following request handler, which handles the intent named HelloWorldIntent:

 const HelloWorldIntentHandler = {
   canHandle(handlerInput) {
     return Alexa.getRequestType(handlerInput.requestEnvelope)
         === 'IntentRequest'
       && Alexa.getIntentName(handlerInput.requestEnvelope)
         === 'HelloWorldIntent';
   },
   handle(handlerInput) {
     const speakOutput = 'Hello World!';
     return handlerInput.responseBuilder
       .speak(speakOutput)
       .getResponse();
   }
 };

The canHandle() function examines the given handler input, which includes information about the request being handled, and decides whether the handler is able to handle the request. If it can, it returns true; otherwise it returns false, indicating that this particular handler isn’t the one for handling the request. In this example, the first thing canHandle() checks is that the request is an intent request, not some other type of request such as a launch request or session-end request. Then it checks that the intent’s name is HelloWorldIntent. If so, then this handler is the one for the job.

Assuming that canHandle() returns true, the handle() method will be called next. Since, at this point, the skill is little more than a garden-variety Hello World example, the handle() method is fairly basic as far as handlers go. After assigning the legendary greeting to a constant, it references a response builder from the given handlerInput to build a response. By calling the speak() function, we are asking Alexa to speak the words “Hello World!” through the Alexa device’s speaker.

Even though at this point, the Star Port 75 skill is nothing more than a “hello world” skill, let’s make a few small customizations so that Alexa will give a greeting fitting to the interstellar travel business:

 const HelloWorldIntentHandler = {
   canHandle(handlerInput) {
     return Alexa.getRequestType(handlerInput.requestEnvelope)
         === 'IntentRequest'
       && Alexa.getIntentName(handlerInput.requestEnvelope)
         === 'HelloWorldIntent';
   },
   handle(handlerInput) {
»    const speakOutput = 'Have a stellar day!';
     return handlerInput.responseBuilder
       .speak(speakOutput)
       .getResponse();
   }
 };

With this change, Alexa will now say, “Have a stellar day!” instead of the trite “Hello world” greeting.

Taking inventory of the rest of the lambda/index.js file, you’ll notice that in addition to the HelloWorldIntentHandler, there are five other request handlers. Where HelloWorldIntentHandler exists to handle the skill-specific intent named HelloWorldIntent, the other handlers are there to handle a few of Amazon’s built-in intents and requests:

  • LaunchRequestHandler—Handles a launch request when the skill is first launched

  • HelpIntentHandler—Handles the built-in AMAZON.HelpIntent intent whenever a user asks for help

  • CancelAndStopIntentHandler—Handles the built-in AMAZON.CancelIntent and AMAZON.StopIntent intents. These will be sent if the user says “Cancel” or “Stop” to leave the skill.

  • SessionEndedRequestHandler—Handles a request to end the session

  • ErrorHandler—Handles any errors that may occur. This doesn’t necessarily mean that Alexa didn’t understand what the user said; more often it means that one of the other handlers threw an error while handling a request.

  • IntentReflectorHandler—An intent handler used for testing and debugging. It handles requests for any intents that aren’t handled by other intent handlers and echoes the otherwise-unhandled intent’s name (see the sketch following this list). You may keep it around if you want, but you should probably remove it before publishing your skill.
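
All of these follow the same canHandle()/handle() pattern you’ve already seen. For example, here’s a sketch of what the template’s IntentReflectorHandler looks like; the exact code in your generated index.js may differ slightly:

 const IntentReflectorHandler = {
   canHandle(handlerInput) {
     // Matches any intent request not claimed by a handler listed before it.
     return Alexa.getRequestType(handlerInput.requestEnvelope)
         === 'IntentRequest';
   },
   handle(handlerInput) {
     // Echo back the name of whatever intent was triggered.
     const intentName = Alexa.getIntentName(handlerInput.requestEnvelope);
     const speakOutput = `You just triggered ${intentName}`;
     return handlerInput.responseBuilder
       .speak(speakOutput)
       .getResponse();
   }
 };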

The handler named LaunchRequestHandler, for instance, will be invoked upon a request of type LaunchRequest. This is the request that is sent to a skill when the skill first opens—in this case, when the user says, “Alexa, open star port seventy five.” It looks like this:

 const LaunchRequestHandler = {
   canHandle(handlerInput) {
     return Alexa.getRequestType(handlerInput.requestEnvelope)
         === 'LaunchRequest';
   },
   handle(handlerInput) {
     const speakOutput =
       'Welcome, you can say Hello or Help. Which would you like to try?';
     return handlerInput.responseBuilder
       .speak(speakOutput)
       .reprompt(speakOutput)
       .getResponse();
   }
 };

As you can see, LaunchRequestHandler isn’t much different from HelloWorldIntentHandler. It creates a response passing a welcome message to speak(). But it also passes the message to reprompt() to repeat the message after a moment if the user doesn’t say anything.

The out-of-the-box welcome message is fine for a generic “hello world” skill, but it isn’t good enough for Star Port 75 Travel. Let’s tweak it to give a more befitting welcome message:

 const LaunchRequestHandler = {
   canHandle(handlerInput) {
     return Alexa.getRequestType(handlerInput.requestEnvelope)
         === 'LaunchRequest';
   },
   handle(handlerInput) {
»    const speakOutput =
»      'Welcome to Star Port 75 Travel. How can I help you?';
     return handlerInput.responseBuilder
       .speak(speakOutput)
       .reprompt(speakOutput)
       .getResponse();
   }
 };

The other request handlers and messages are fine as-is, but feel free to tweak them if you’d like.

We’re just getting started with the Star Port 75 skill, but the two request handlers and messages we’ve touched should serve as a fine introduction to writing custom Alexa skills. There’s a lot more we’ll add to the skill as we progress through the book. Before we deploy what we have created and try it out, let’s add a couple of helpful components to the skill, starting with a catch-all intent for utterances that don’t match any other intents.

Adding a Fallback Handler

Even though the sample utterances defined for the HelloWorldIntent serve as a guide to Alexa’s NLP service to match utterances to the intent, it’s not necessary for the user to speak those precise phrases to trigger the intent. Other utterances that the NLP considers reasonably close to the sample utterances will also match. In fact, without any further changes, almost any utterance that doesn’t match one of Alexa’s built-in intents will match the HelloWorldIntent.

For example, if the user were to say “watermelon,” that might be close enough to trigger the HelloWorldIntent. It’s not that “watermelon” is especially close to “hello”; it’s that it isn’t any closer to any other intent.

In cases like that, it’s usually better for Alexa to gracefully respond that she didn’t understand what the user said than to reply with a nonsensical answer. For that, Amazon provides the built-in AMAZON.FallbackIntent, a catch-all intent that is triggered when the user’s utterance doesn’t match any other intent’s sample utterances.

To take advantage of AMAZON.FallbackIntent, we must declare it in the interaction model just like any other intent by adding the following JSON to skill-package/interactionModels/custom/en-US.json:

 {
   "name": "AMAZON.FallbackIntent",
   "samples": []
 },

It’s not necessary to specify sample utterances for AMAZON.FallbackIntent, although if you find that certain utterances are incorrectly matching other intents, you can list them under AMAZON.FallbackIntent to ensure that they’re directed there.
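
For example, if testing showed that “watermelon” was slipping through to HelloWorldIntent, a sketch like this would pin it to the fallback intent:

 {
   "name": "AMAZON.FallbackIntent",
   "samples": [
     "watermelon"
   ]
 },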

As with any other intent, we must also create an intent handler that will respond to the intent:

 const FallbackIntentHandler = {
   canHandle(handlerInput) {
     return Alexa.getRequestType(handlerInput.requestEnvelope)
         === 'IntentRequest'
       && Alexa.getIntentName(handlerInput.requestEnvelope)
         === 'AMAZON.FallbackIntent';
   },
   handle(handlerInput) {
     const speakOutput =
       'Sorry, I don\'t know about that. Please try again.';

     return handlerInput.responseBuilder
       .speak(speakOutput)
       .reprompt(speakOutput)
       .getResponse();
   }
 };

Finally, be sure to register the intent handler with the skill builder:

 exports.handler = Alexa.SkillBuilders.custom()
   .addRequestHandlers(
     LaunchRequestHandler,
     HelloWorldIntentHandler,
     HelpIntentHandler,
     CancelAndStopIntentHandler,
»    FallbackIntentHandler,
     SessionEndedRequestHandler,
     IntentReflectorHandler,
   )

   ...

   .lambda();

Because AMAZON.FallbackIntent is just another intent, FallbackIntentHandler can be registered among the other handlers in any order. It’s only important that it be listed before IntentReflectorHandler, since that handler doesn’t inspect the intent name and will therefore match any intent if listed higher.

Now, if a user were to say “watermelon,” “kangaroo,” “zip-a-dee-doo-dah,” or anything else that doesn’t match any other intent, it will be directed to AMAZON.FallbackIntent and handled by the FallbackIntentHandler.

If, during testing or after the skill has been published, you find that some utterances you’d expect to be routed to AMAZON.FallbackIntent are being directed to another intent, you can adjust the fallback intent’s sensitivity level in the interaction model:

 {
   "interactionModel": {
     "languageModel": {
       "invocationName": "star port seventy five",
       "intents": [
         ...
       ],
       "types": [],
»      "modelConfiguration": {
»        "fallbackIntentSensitivity": {
»          "level": "HIGH"
»        }
»      }
     }
   }
 }

The fallback intent sensitivity level can be set to “LOW”, “MEDIUM”, or “HIGH”. The default sensitivity level is “LOW”. Raising it will cause more utterances to be routed to the fallback intent handler.

Up to this point, we’ve been hard-coding all of the text that Alexa speaks in the request handlers. That works fine, but we can do better. Let’s see how to extract those strings so that they can be managed independently of the request handlers.

Externalizing Strings

Right now, while our skill is rather basic, it seems natural to hard-code the response text in the handle() functions as we have done. But as our skill evolves, it will be helpful to manage all of those strings in one place. And when we get to Chapter 8, Localizing Responses, the responses will differ depending on the user’s preferred language, and hard-coded strings won’t be an option.

To help with externalization of strings, we’ll add the i18next module to our project:

 $ npm install --prefix lambda i18next

This module enables internationalization of string values. Even though we’re only supporting English output at this point, it’s easier to externalize the strings now while there are only a few strings to manage than to wait until we have several strings to extract.

To use the i18next module, we’ll first need to require() it in index.js:

 const i18next = require('i18next');
 const languageStrings = require('./languageStrings');

In addition to the i18next module, we also require() a local module named languageStrings, which will contain the externalized string values. We’ll define that module in a moment, but first let’s use both of these modules to extend the handlerInput object with a utility function to look up a string by name:

 const LocalisationRequestInterceptor = {
   process(handlerInput) {
     // Returning the promise lets the SDK finish initializing i18next
     // before the request is dispatched to a handler.
     return i18next.init({
       lng: Alexa.getLocale(handlerInput.requestEnvelope),
       resources: languageStrings
     }).then((i18n) => {
       handlerInput.t = (...args) => i18n(...args);
     });
   }
 };

LocalisationRequestInterceptor is a request interceptor that adds a t() function to handlerInput. When a request comes into our skill, the interceptor’s process() function uses the user’s locale and the languageStrings module to initialize the i18next module, then uses the resulting i18n function to look up strings from the externalized file.

The interceptor code is brief but a little dense; it will make more sense when you see it in action.

So that the interceptor can do its job, we’ll need to register it with the skill builder:

 exports.handler = Alexa.SkillBuilders.custom()

   ...

   .addRequestInterceptors(
     LocalisationRequestInterceptor
   )
   .lambda();

Revisiting the intent handler for HelloWorldIntent, we can use the new t() function to look up the text with a key of “HELLO_MSG”:

 const HelloWorldIntentHandler = {
   canHandle(handlerInput) {
     return Alexa.getRequestType(handlerInput.requestEnvelope)
         === 'IntentRequest'
       && Alexa.getIntentName(handlerInput.requestEnvelope)
         === 'HelloWorldIntent';
   },
   handle(handlerInput) {
»    const speakOutput = handlerInput.t('HELLO_MSG');
     return handlerInput.responseBuilder
       .speak(speakOutput)
       .getResponse();
   }
 };

When handlerInput.t() is called, it will return the greeting as defined in languageStrings.js under the “HELLO_MSG” key:

 module.exports = {
   en: {
     translation: {
       WELCOME_MSG: 'Welcome to Star Port 75 Travel. How can I help you?',
»      HELLO_MSG: 'Have a stellar day!',
       HELP_MSG: 'You can say hello to me! How can I help?',
       GOODBYE_MSG: 'Goodbye!',
       REFLECTOR_MSG: 'You just triggered {{intentName}}',
       FALLBACK_MSG: 'Sorry, I don\'t know about that. Please try again.',
       ERROR_MSG: 'Sorry, I had trouble doing what you asked. ' +
         'Please try again.'
     }
   }
 }

Notice that the top-level key in languageStrings.js is “en”, which is shorthand for “English”. Later we may add additional languages to this file, but for now English is all we need. Underneath the “en” key is a set of strings for all of the intents in our skill. Specifically, the “HELLO_MSG” entry is mapped to the message we want Alexa to speak when the HelloWorldIntent is handled. We can apply the other strings in the skill’s other request handlers the same way, by calling handlerInput.t() instead of hard-coding the responses, as the following sketch shows.
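
For example, here’s a sketch of LaunchRequestHandler reworked to look up its welcome message by the WELCOME_MSG key:

 const LaunchRequestHandler = {
   canHandle(handlerInput) {
     return Alexa.getRequestType(handlerInput.requestEnvelope)
         === 'LaunchRequest';
   },
   handle(handlerInput) {
     // Look up the externalized welcome message instead of hard-coding it.
     const speakOutput = handlerInput.t('WELCOME_MSG');
     return handlerInput.responseBuilder
       .speak(speakOutput)
       .reprompt(speakOutput)
       .getResponse();
   }
 };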

Now we’re ready to deploy our skill and kick the tires.
