Eliciting Missing Slot Values

Imagine that a user wants to book a weeklong trip to Mars, leaving on July 17th. If the user were to say, “Schedule a trip to Mars, leaving July 17th and returning July 23rd,” then our skill would happily oblige, with no further changes to the skill’s code.

However, that requires that the user know exactly how to speak to our skill and what slots the intent is expecting. If the user were to leave out any information, how would the skill respond? For example, what if the user were to say, “I want to go to Mars” without specifying departure and return dates? Let’s run that through the ask dialog command to see what happens:

 $ ask dialog --locale en-US
  User > open star port seventy five
  Alexa > Welcome to Star Port 75 Travel. How can I help you?
  User > Let's plan a trip to Mars
  Alexa > I've got you down for a trip to Mars, leaving on undefined and
  returning undefined.

As you can see, Alexa had no trouble understanding that the user wants to plan a trip to Mars, but because no dates were specified, she has the user leaving on “undefined” dates. While technically correct based on what the user asked for, “undefined” is probably not what the user intended. It’s quite likely that the user expected that Alexa would ask follow-up questions to complete their travel plan. Instead, Alexa used what little information she received and punted on the rest.

But we can make Alexa ask for missing information by enabling slot elicitation on the slots whose values are required and by declaring prompts that define the follow-up questions. We’ll declare the intent and each of its slots for which we’re defining dialog rules within the interaction model’s dialog property:

 "dialog": {
  "intents": [
   {
    "name": "ScheduleTripIntent",
    "slots": [
     {
      "name": "destination",
      "type": "PLANETS",
      "elicitationRequired": true,
      "prompts": {
       "elicitation": "Slot.Elicitation.ScheduleTrip.Destination"
      }
     },
     {
      "name": "departureDate",
      "type": "AMAZON.DATE",
      "elicitationRequired": true,
      "prompts": {
       "elicitation": "Slot.Elicitation.ScheduleTrip.DepartureDate"
      }
     },
     {
      "name": "returnDate",
      "type": "AMAZON.DATE",
      "elicitationRequired": true,
      "prompts": {
       "elicitation": "Slot.Elicitation.ScheduleTrip.ReturnDate"
      }
     }
    ]
   }
  ]
 },

On the surface, this looks almost as if we’re repeating the intent definition that we’ve already declared in languageModel. But it is subtly different. In the list of slots, we still must specify each slot’s name and type, both of which must match what was declared in the language model. But instead of defining synonyms or other concepts pertaining to the language model, this is where we should declare dialog rules such as elicitation, validation, and confirmation.

The elicitationRequired property is what enables elicitation on each slot. When it’s set to true, Alexa will prompt the user for the missing information if no value for that slot was given. The prompts property is where we define the prompts that Alexa will speak to the user during the dialog. In this case, the elicitation prompt specifies how Alexa will ask the user for missing information if the user fails to specify the destination, departure date, or return date.

The elicitation property does not carry the actual text that Alexa will speak to the user, however. Instead, it contains a reference to a prompt defined in the prompts section of the interaction model. For example, if the return date isn’t specified, then Alexa will ask the user to provide it by speaking the prompt whose ID is “Slot.Elicitation.ScheduleTrip.ReturnDate”. The actual prompt is one of three prompts we will define in the prompts section of the interaction model:

 "prompts": [
  {
   "id": "Slot.Elicitation.ScheduleTrip.Destination",
   "variations": [
    { "type": "PlainText", "value": "Where do you want to go?" },
    { "type": "PlainText", "value": "Which planet would you like to visit?" }
   ]
  },
  {
   "id": "Slot.Elicitation.ScheduleTrip.DepartureDate",
   "variations": [
    { "type": "PlainText", "value": "When do you want to depart?" },
    { "type": "PlainText", "value": "When will your trip start?" }
   ]
  },
  {
   "id": "Slot.Elicitation.ScheduleTrip.ReturnDate",
   "variations": [
    { "type": "PlainText", "value": "When do you want to return?" },
    { "type": "PlainText", "value": "When will your trip end?" }
   ]
  }
 ]

Notice that each prompt is defined by an ID and a variations property. To make Alexa sound more natural, it’s useful to not have her prompt the user exactly the same way each time she asks a follow-up question. The variations property is an array containing one or more variations for Alexa to choose randomly when she prompts the user. For example, if the user fails to specify the departure date, Alexa may ask “When do you want to depart?” or she may ask “When will your trip start?” Both ask for the same thing, but in different ways. For the sake of brevity, the sample only includes two variations of each prompt, but you may declare more. In fact, the more variations you declare, the more natural conversation with your skill will seem.

The type property for all of these prompts is set to “PlainText”, meaning that the prompt text is spoken exactly as written. The other option is “SSML”, which lets you craft richer responses using the Speech Synthesis Markup Language, covered in Chapter 6, Embellishing Response Speech.
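For a taste of what an SSML variation might look like, here’s a hypothetical sketch of one of the departure-date variations rewritten with a brief pause (the pause itself is just an illustration, not part of our skill):

 {
  "type": "SSML",
  "value": "<speak>Hmm, <break time='300ms'/> when do you want to depart?</speak>"
 }

PlainText and SSML variations can be mixed freely within the same prompt’s variations array.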

Now that we’ve defined the dialog rules and the prompts to go with them, we’re almost ready to test the elicitation rules. But first, we need to tell Alexa that we want her to be responsible for all dialog handling. The simplest way to handle incomplete dialog requests is to configure the intent for automatic dialog delegation, by setting the delegationStrategy property to “ALWAYS” on the intent:

 "dialog": {
  "intents": [
  {
  "name": "ScheduleTripIntent",
  "delegationStrategy": "ALWAYS",
 ...
  }
  ]
 },

“ALWAYS” enables automatic dialog delegation. When dialogs are automatically delegated, Alexa will handle all requests with incomplete slot data, without invoking the fulfillment code until all required slots are available. If a user fails to specify a slot value, she will automatically prompt them for the missing information using one of the prompts associated with that slot. The other option for the delegationStrategy property is “SKILL_RESPONSE”, which is the default value and indicates that automatic delegation is off and that one or more intent handlers will be involved in handling the incomplete dialog requests. We’ll talk more about how to explicitly handle dialog requests in section Explicitly Handling Dialog Delegation.

If your skill has several intents for which you want automatic delegation enabled, you can optionally specify the delegation strategy for the entire dialog rules configuration instead of for each intent:

 "dialog": {
  "delegationStrategy": "ALWAYS",
  "intents": [
  {
 ...
  }
  ]
 },

You can enable automatic delegation globally for all intents by setting the delegationStrategy property to “ALWAYS” at the dialog rules level. From there, you may choose to disable it for individual intents by setting the same property to “SKILL_RESPONSE” on each intent. Whether you declare the delegation strategy individually for each intent or globally for all intents will depend on whether you plan to use explicit dialog handling more often than not. In our skill, either way will work fine at this point, as we have no other intents that accept slots and require dialogs.
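To illustrate the per-intent opt-out, here’s a sketch of a global “ALWAYS” strategy overridden for a second, hypothetical intent (SomeOtherIntent is invented for this example and isn’t part of our skill):

 "dialog": {
  "delegationStrategy": "ALWAYS",
  "intents": [
   {
    "name": "ScheduleTripIntent",
 ...
   },
   {
    "name": "SomeOtherIntent",
    "delegationStrategy": "SKILL_RESPONSE",
 ...
   }
  ]
 },

With this configuration, Alexa delegates dialogs automatically for ScheduleTripIntent, while incomplete requests for SomeOtherIntent would still be routed to the skill’s own handlers.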

Now we’re ready to test our new elicitation rules. Unfortunately, there’s currently no way to automatically test auto-delegated dialogs with tools such as BST or the Alexa Skills Test Framework, so we’ll need to deploy the skill first and then test it manually.

Once deployed, we can use ask dialog or the Alexa developer console to test the skill. Using the Alexa developer console simulator, the conversation might look like this:

 [Figure: images/dialog/simulator-elicitation.png — the Alexa developer console simulator prompting for the missing departure and return dates]
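If you’d rather test from the command line, the same conversation through ask dialog might go something like this (a hypothetical transcript; the exact prompts will vary, since Alexa picks a variation at random):

 $ ask dialog --locale en-US
  User > open star port seventy five
  Alexa > Welcome to Star Port 75 Travel. How can I help you?
  User > Let's plan a trip to Mars
  Alexa > When do you want to depart?
  User > July 17th
  Alexa > When will your trip end?
  User > July 23rd

Only after both dates have been collected does the skill’s own confirmation response fire, with actual dates in place of “undefined”.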

That’s a lot better than traveling on some “undefined” dates. As you can see, when the user asked to schedule a trip to Mars, but didn’t specify travel dates, Alexa stepped in, using the prompts defined for those missing slots, and asked the user to provide the missing information.

Slot elicitation ensures that the intent is given all of the information it needs to be handled successfully. But what if the values given don’t make sense? Let’s see how to define slot validation rules.
