Wrapping Up

Even though you’ll probably start developing your skill for a single language, eventually you may want to expand your reach and open up your skill to other languages. From the very beginning, our skill has been using the i18next library to externalize response strings from its intent handlers, establishing a foundation upon which to build support for multiple languages.
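
As a refresher, the pattern looks roughly like the sketch below. The resource keys, messages, and interceptor wiring are illustrative placeholders rather than the skill’s actual code.

```javascript
const i18next = require('i18next');

// Response strings live in per-locale resource bundles instead of being
// hard-coded in the intent handlers.
const resources = {
  'en-US': { translation: { WELCOME_MSG: 'Welcome! Where would you like to go?' } },
  'es-ES': { translation: { WELCOME_MSG: '¡Bienvenido! ¿A dónde te gustaría ir?' } }
};

// A request interceptor initializes i18next with the locale of the
// incoming request so that handlers can look up strings by key.
const LocalizationInterceptor = {
  async process(handlerInput) {
    await i18next.init({
      lng: handlerInput.requestEnvelope.request.locale,
      fallbackLng: 'en-US',
      resources
    });
    handlerInput.t = (key, options) => i18next.t(key, options);
  }
};

// Inside a handler, the externalized string is referenced by key:
//   return handlerInput.responseBuilder
//     .speak(handlerInput.t('WELCOME_MSG'))
//     .getResponse();
```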

What Alexa says is only half of the localization story, however. We must also customize our interaction model to support other languages so that she can hear and understand utterances spoken in those languages. And while translating our interaction model’s utterances, we can also translate the prompts that Alexa will speak as part of a dialog.
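
In the skill package, each supported locale gets its own interaction model file (for example, an es-ES.json alongside en-US.json). The fragment below is only a hypothetical illustration of its shape; the invocation name, intent, slot, and prompt names are made up, and the dialog section that ties the prompt to its slot is omitted.

```json
{
  "interactionModel": {
    "languageModel": {
      "invocationName": "mi agencia de viajes",
      "intents": [
        {
          "name": "ScheduleTripIntent",
          "slots": [{ "name": "destination", "type": "AMAZON.City" }],
          "samples": ["planifica un viaje a {destination}"]
        }
      ]
    },
    "prompts": [
      {
        "id": "Elicit.Slot.destination",
        "variations": [
          { "type": "PlainText", "value": "¿A dónde te gustaría viajar?" }
        ]
      }
    ]
  }
}
```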

The BST testing tool enables us to test multi-language support by extracting the expected response text into locale-specific mappings and referring to those mappings in a locale-agnostic way. When the tests are run, BST executes each test multiple times, once for each supported locale.
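
In outline, that might look something like the sketch below: the test file declares the locales it supports, the expected text is referenced by key, and each key is resolved against a per-locale mapping file. The file names, keys, and values here are placeholders; the exact layout follows BST’s own conventions.

```yaml
# trip.test.yml (sketch; names and keys are illustrative)
---
configuration:
  locales: en-US, es-ES

---
- test: Launch the skill
# welcome_msg is resolved against the mapping file for whichever
# locale this run of the test is using
- LaunchRequest: welcome_msg
```

```yaml
# locales/es-ES.yml -- locale-specific mapping referenced by the test above
welcome_msg: "¡Bienvenido! ¿A dónde te gustaría ir?"
```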

Finally, using SSML’s <lang> and <voice> tags, we can fine-tune our skill’s responses so that Alexa correctly pronounces non-English words, or even apply a completely different voice that speaks the language natively.
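
For example, a handler might wrap a French word in <lang> so it’s pronounced correctly and then hand a whole phrase to a native French voice. The handler, intent name, and phrasing below are illustrative rather than code from the chapter; Celine is one of the Amazon Polly voices supported by Alexa’s <voice> tag.

```javascript
const ParisIntentHandler = {
  canHandle(handlerInput) {
    return handlerInput.requestEnvelope.request.type === 'IntentRequest'
      && handlerInput.requestEnvelope.request.intent.name === 'ParisIntent';
  },
  handle(handlerInput) {
    // <lang> fixes the pronunciation of a single foreign word; <voice>
    // switches the entire phrase to a voice that speaks French natively.
    const speech =
      'Your destination is <lang xml:lang="fr-FR">Paris</lang>. ' +
      'Or, as a native speaker would say: ' +
      '<voice name="Celine"><lang xml:lang="fr-FR">Bienvenue à Paris !</lang></voice>';
    return handlerInput.responseBuilder
      .speak(speech)
      .getResponse();
  }
};
```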

We’ve spent the past few chapters focusing on sound and how Alexa speaks. But while Alexa applications are meant to be voice-first, they can also provide visual complements to voice-first interactions. In the next chapter, we’re going to look at how to return visual feedback using cards, simple components displayed in the Alexa companion application installed on a user’s mobile device.
