Wrapping Up

The Alexa Presentation Language (APL) is a JSON-based template language that lets skills render visual user interfaces on screen-enabled Alexa devices. APL offers the flexibility to design nearly any visual user interface you can imagine.
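As a minimal sketch, an APL document is a JSON object with a `mainTemplate` describing the components to render (the version string and text here are illustrative assumptions):

```json
{
  "type": "APL",
  "version": "1.6",
  "mainTemplate": {
    "items": [
      {
        "type": "Text",
        "text": "Hello from APL!",
        "textAlign": "center"
      }
    ]
  }
}
```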

APL offers the option of extracting style details into styles, roughly analogous to the way CSS is used to style HTML. This lets common styles be defined in one place and reused across many components, and it creates a semantic link between the styles and the components they are applied to.
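As a hedged sketch, a style is declared once in the document's `styles` block and then applied by name on a component (the style name and property values here are assumptions):

```json
"styles": {
  "titleStyle": {
    "values": [
      {
        "fontSize": "48dp",
        "color": "#FAFAFA"
      }
    ]
  }
},
"mainTemplate": {
  "items": [
    {
      "type": "Text",
      "style": "titleStyle",
      "text": "Welcome!"
    }
  ]
}
```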

Resource values offer a way to create named values, similar to how constants are created in many programming languages. The named values can represent dimensions, colors, or arbitrary strings. Those named values can then be referenced in component properties and styles, offering many of the same benefits as styles, and they can even be used for properties that styles don’t support.
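For example, a document's `resources` block can define named values that components and styles then reference with the `@` prefix (the names and values here are hypothetical):

```json
"resources": [
  {
    "colors": {
      "accentColor": "#00CAFF"
    },
    "dimensions": {
      "headerPadding": "24dp"
    },
    "strings": {
      "welcomeText": "Hello!"
    }
  }
],
"mainTemplate": {
  "items": [
    {
      "type": "Text",
      "text": "@welcomeText",
      "color": "@accentColor",
      "paddingTop": "@headerPadding"
    }
  ]
}
```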

A skill’s request handlers can pass data they receive, look up, or calculate as model data to be rendered in APL templates. This makes for a dynamic visual user-interface experience.
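One way this fits together, sketched with hypothetical names: the fulfillment code sends an `Alexa.Presentation.APL.RenderDocument` directive with a `datasources` payload, and the template binds to that data through its `payload` parameter:

```json
"mainTemplate": {
  "parameters": ["payload"],
  "items": [
    {
      "type": "Text",
      "text": "${payload.helloData.message}"
    }
  ]
}
```

On the fulfillment side, the directive would carry `datasources: { helloData: { message: "Hi there!" } }` alongside the document itself; the data-binding expression in `text` resolves against that object at render time.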

Touch events add another dimension to Alexa’s visual user interface when a user interacts with a screen-enabled device. The TouchWrapper component can be placed around any other component in an APL template to fire events back to the skill’s fulfillment code when the user physically touches the screen. In the fulfillment code, touch event handlers are written in much the same way as handlers for other kinds of Alexa requests, such as intent requests.
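A touch event handler might look like the following sketch, assuming the template’s TouchWrapper uses a SendEvent command that passes a `'buttonPressed'` argument (the handler and argument names are hypothetical):

```javascript
// Handles the Alexa.Presentation.APL.UserEvent request that a
// TouchWrapper's SendEvent command sends to the skill's fulfillment.
// The 'buttonPressed' argument name is an illustrative assumption.
const TouchEventHandler = {
  canHandle(handlerInput) {
    const request = handlerInput.requestEnvelope.request;
    return request.type === 'Alexa.Presentation.APL.UserEvent'
      && Array.isArray(request.arguments)
      && request.arguments[0] === 'buttonPressed';
  },
  handle(handlerInput) {
    // Respond just as any other request handler would.
    return handlerInput.responseBuilder
      .speak('Thanks for tapping the screen!')
      .getResponse();
  }
};
```

Structurally, this is the same `canHandle`/`handle` pair used for intent requests; only the request type and the event arguments differ.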

So far, all of the interaction between a user and our skill has been initiated by the user when speaking to Alexa. Coming up in the next chapter, we’re going to flip that around and look at sending reminders and notifications—ways to have Alexa initiate communication without first being spoken to.
