Returning Simple Cards

In a desktop or web application, visual cues help users figure out what their options are. The user can see icons, buttons, and menu items and make reasonable guesses about how the application works. But a voice application offers few visual cues, so the user may need to ask for help. That’s where the built-in AMAZON.HelpIntent comes into play.

When the user says “help,” the skill’s handler for AMAZON.HelpIntent will be invoked and respond with hints about what the user can say to use the skill. Using cards, we can also surface that help in the companion application. We’ll start by adding a simple card to the response returned from the AMAZON.HelpIntent handler that displays the same text Alexa will speak:

 return handlerInput.responseBuilder
   .speak(speakOutput)
   .reprompt(speakOutput)
»  .withSimpleCard("How to use this skill", speakOutput)
   .getResponse();

Cards aren’t returned in every response. But by calling withSimpleCard() on the response builder, we’re asking for the response to include a card whose title is “How to use this skill” and whose content is “You can say hello to me! How can I help?” When displayed in the companion application, it might look like this:
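Under the covers, withSimpleCard() just attaches a card object to the response JSON that Alexa sends back to the device. The helper below is an illustrative stand-in for that behavior, not the SDK’s actual implementation, but the card structure it produces matches the Alexa response format:

```javascript
// Illustrative stand-in for what the response builder does when
// withSimpleCard(title, content) is called: it attaches a card of
// type "Simple" to the response envelope.
function withSimpleCard(response, title, content) {
  return {
    ...response,
    card: { type: 'Simple', title: title, content: content }
  };
}

const speakOutput = 'You can say hello to me! How can I help?';
const response = withSimpleCard(
  { outputSpeech: { type: 'PlainText', text: speakOutput } },
  'How to use this skill',
  speakOutput
);

console.log(response.card.type);   // "Simple"
console.log(response.card.title);  // "How to use this skill"
```

The companion application renders whatever lands in that card property, which is why the title and content strings we pass to withSimpleCard() appear verbatim on screen.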

images/visual/SimpleCardNaive.png

That’s a good start, but it’s not very helpful. Even though it will respond to an utterance of “hello,” that’s far from all our skill will do. The skill’s main function is to schedule interplanetary travel. Therefore, the response—both spoken and displayed in the card—should be more appropriate to scheduling space travel.

Before we make changes to the intent handler, let’s add expectations about the card to the test for the help intent:

 ---
 - test: Launch and ask for help
 - LaunchRequest: welcomeMessage
 - AMAZON.HelpIntent:
   - prompt: helpMessage
»  - cardTitle: helpCardTitle
»  - cardContent: helpCardContent

Here we’re using the cardTitle and cardContent properties to express what we expect for the card in the response. Because the test is multi-locale, the actual expected values are externalized in locale-specific files. For instance, here are the English values for helpCardTitle and helpCardContent in locales/en.yml:

 helpMessage: |
   You can ask me to plan a trip to any planet in our solar
   system. For example, you can say "Plan a trip to Jupiter." What
   do you want to do?
»helpCardTitle: How to use this skill
»helpCardContent: |
»  Ask me to plan a trip to any planet in our solar
»  system. For example, you could say: "Plan a trip to Jupiter".
»  "Schedule a trip to Mars" "Plan a trip to Venus leaving next
»  Monday"

Similarly, the Spanish values will need to be added to locales/es.yml:

 helpMessage: |
   Pídeme que planee un viaje a cualquier planeta
   de nuestro sistema solar. Por ejemplo, podría decir
   "Planifique un viaje a Júpiter". ¿Qué quieres hacer?
»helpCardTitle: Cómo usar esta habilidad
»helpCardContent: |
»  Pídeme que planee un viaje a cualquier planeta
»  de nuestro sistema solar. Por ejemplo, podría decir
»  "Planifique un viaje a Júpiter".

Notice that in addition to adding locale-specific strings for the card title and content, we’ve also changed the expected helpMessage string to be more fitting for planning a trip through the solar system.

Also note that the spoken text differs from what is displayed on the card. Cards are quite useful for offering information beyond what is spoken; in this case, the card offers more examples than the spoken text.

In order to make those tests pass, we’ll need to change the intent handler to return those messages in a simple card. First, we’ll edit languageStrings.js, changing the value for HELP_MSG and adding entries for the card title and content:

 module.exports = {
   en: {
     translation: {
       ...
       HELP_MSG: 'You can ask me to plan a trip to any planet in our solar ' +
         'system. For example, you can say "Plan a trip to Jupiter." What ' +
         'do you want to do?',
       HELP_CARD_TITLE: 'How to use this skill',
       HELP_CARD_MSG: 'Ask me to plan a trip to any planet in our solar ' +
         'system.\nFor example, you could say:\n"Plan a trip to Jupiter"' +
         '\n"Schedule a trip to Mars"\n"Plan a trip to Venus leaving ' +
         'next Monday"',
       ...
     }
   },
   es: {
     translation: {
       ...
       HELP_MSG: 'Pídeme que planee un viaje a cualquier planeta ' +
         'de nuestro sistema solar. Por ejemplo, podría decir ' +
         '"Planifique un viaje a Júpiter". ¿Qué quieres hacer?',
       HELP_CARD_TITLE: 'Cómo usar esta habilidad',
       HELP_CARD_MSG: 'Pídeme que planee un viaje a cualquier planeta ' +
         'de nuestro sistema solar. Por ejemplo, podría decir:\n' +
         '"Planifique un viaje a Júpiter"\n"Programe un viaje a Marte"' +
         '\n"Planifique un viaje a Venus que saldrá el próximo lunes".',
       ...
     }
   }
 }
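The handlerInput.t() function used next is typically backed by an i18n library that selects the translation table matching the request’s locale. The helper below is a hypothetical, dependency-free sketch of that lookup (the names makeT and languageStrings mirror the pattern but are illustrative, not the SDK’s actual mechanism):

```javascript
// Hypothetical stand-in for the locale-aware t() helper: pick the
// translation table whose key matches the request's language code,
// falling back to English, then look the message key up in it.
const languageStrings = {
  en: { translation: { HELP_CARD_TITLE: 'How to use this skill' } },
  es: { translation: { HELP_CARD_TITLE: 'Cómo usar esta habilidad' } }
};

function makeT(locale) {
  const lang = locale.split('-')[0];  // 'es-ES' -> 'es'
  const table = (languageStrings[lang] || languageStrings.en).translation;
  // Fall back to the key itself if no translation exists.
  return (key) => (table[key] !== undefined ? table[key] : key);
}

const t = makeT('es-ES');
console.log(t('HELP_CARD_TITLE'));  // "Cómo usar esta habilidad"
```

Because the lookup is driven by the request’s locale, the same HELP_CARD_TITLE key yields the English or Spanish string without any changes to the intent handler itself.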

Now we just need to tweak the intent handler to use those strings in the call to withSimpleCard():

 const cardTitle = handlerInput.t('HELP_CARD_TITLE');
 const cardContent = handlerInput.t('HELP_CARD_MSG');

 return handlerInput.responseBuilder
   .speak(speakOutput)
   .reprompt(speakOutput)
»  .withSimpleCard(cardTitle, cardContent)
   .getResponse();
 }

After deploying the skill, invoke it by saying, “Alexa, open Star Port Seventy Five Travel,” and then say “help.” Alexa should speak the new, more relevant help message. But also, if you open the companion application on your phone, you’ll see the help card. On the iOS companion application, it might look like the screenshot.

images/visual/SimpleCardSmart.png

Our help intent handler is much more useful now. And, if the user opens their companion application, they’ll be shown a card with additional help information. Let’s add another card to our skill, but this time with an image.
