Training a model

Now that we have a working model, it is time to put it into action.

Training and publishing the model

The first step in using the model is to make sure that it has enough utterances to work with. Until now, we have added only one utterance per intent. Before we deploy the application, we need more.

Think of three to four different ways to set or get the room temperature and add them, specifying the entities and intents. Also, add a couple of utterances that fall into the None intent, just for reference.
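For illustration only, the new utterances could look something like the following; the intent and entity labels in parentheses are placeholders, so use the names you defined for your own application:

    Set the living room temperature to 22 degrees    (set-temperature intent; room: living room, temperature: 22)
    Make it warmer in the bedroom                     (set-temperature intent; room: bedroom)
    What is the temperature in the kitchen?           (get-temperature intent; room: kitchen)
    Is it cold in the bathroom?                       (get-temperature intent; room: bathroom)
    What is the weather like tomorrow?                (None)
    Play some jazz music                              (None)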

When we have added some new utterances, we need to train the model. Training is what teaches LUIS to recognize the relevant entities and intents in future queries. While training is performed periodically, it is wise to trigger it whenever you have made changes, and always before publishing. This can be done by clicking Train in the top menu.

To test the application, you can simply enter test sentences in the Interactive Testing tab. This will show you how any given sentence is labeled, and what intents the service has discovered, as shown in the following screenshot:

[Screenshot: the Interactive Testing tab, showing how a test sentence is labeled and the intents the service has discovered]

With the training completed, we can publish the application. This will deploy the models to an HTTP endpoint, which will interpret the sentences that we send to it.

Select Publish from the left-hand menu. This will present you with the following screen:

[Screenshot: the Publish page]

Click on the Publish button to deploy the application. The URL beneath the Endpoint url settings field is the endpoint where the model is deployed. As you can see, it specifies the application ID, as well as the subscription key.

Before we go any further, we can verify that the endpoint actually works. You can do this by entering a query into the text field (for instance, get the bedroom temperature) and clicking on the link. This should present you with something similar to the following screenshot:

[Screenshot: the JSON response returned by the published endpoint]
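If you prefer to verify the endpoint from code rather than the browser, the following is a minimal sketch that calls it with HttpClient. It assumes the v2.0 URL format shown on the Publish page; the region, application ID, and subscription key are placeholders, so copy the exact URL from your own Publish page:

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    public class EndpointCheck {
        public static async Task QueryEndpointAsync(string utterance) {
            // Placeholders - use the exact endpoint URL from your own Publish page.
            string url = "https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/APP_ID_HERE" +
                         "?subscription-key=API_KEY_HERE&q=" + Uri.EscapeDataString(utterance);

            using (HttpClient client = new HttpClient()) {
                // The response is a JSON document containing the query, the
                // topScoringIntent, and any entities that were recognized.
                string json = await client.GetStringAsync(url);
                Console.WriteLine(json);
            }
        }
    }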

When the model has been published, we can move on to access it through the code.

Connecting to the smart house application

To be able to easily work with LUIS, we will want to add the NuGet client package. In the smart house application, go to the NuGet package manager and find the Microsoft.Cognitive.LUIS package. Install this into the project.

We will need to add a new class called Luis. Place the file under the Model folder. This class will be in charge of calling the endpoint and processing the result.

As we will need to test this class, we will need to add a View and a ViewModel. Add the file LuisView.xaml to the View folder, and add LuisViewModel.cs to the ViewModel folder.

The View should be rather simple. It should contain two TextBox elements, one for inputting requests and the other for displaying results. We also need a button to execute commands.
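A minimal sketch of what LuisView.xaml could contain is shown below. The x:Class namespace is a placeholder for your own project, and the bindings assume the InputText, ResultText, and ExecuteUtteranceCommand members used later in this section; the UpdateSourceTrigger setting makes sure the command's CanExecute check reruns as you type:

    <UserControl x:Class="SmartHouse.View.LuisView"
                 xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                 xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
        <StackPanel Margin="10">
            <!-- Input field for the utterance we want to send to LUIS -->
            <TextBox Text="{Binding InputText, UpdateSourceTrigger=PropertyChanged}" />

            <!-- Executes the query through the ViewModel command -->
            <Button Content="Execute" Margin="0,5"
                    Command="{Binding ExecuteUtteranceCommand}" />

            <!-- Read-only output field for the formatted result -->
            <TextBox Text="{Binding ResultText}" IsReadOnly="True" Height="200"
                     TextWrapping="Wrap" VerticalScrollBarVisibility="Auto" />
        </StackPanel>
    </UserControl>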

Add the View as a TabItem in the MainView.xaml file.

The ViewModel should have two string properties, one for each of the TextBox elements. It will also need an ICommand property for the button command.
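A minimal sketch of the resulting skeleton is shown below. It assumes that the ObservableObject base class exposes a RaisePropertyChangedEvent helper; if your base class uses a different property-change helper, adjust the calls accordingly:

    // ICommand lives in System.Windows.Input.
    // RaisePropertyChangedEvent is assumed to be the helper exposed by the
    // ObservableObject base class used in this project.
    public class LuisViewModel : ObservableObject {
        private string _inputText;
        public string InputText {
            get { return _inputText; }
            set {
                _inputText = value;
                RaisePropertyChangedEvent("InputText");
            }
        }

        private string _resultText;
        public string ResultText {
            get { return _resultText; }
            set {
                _resultText = value;
                RaisePropertyChangedEvent("ResultText");
            }
        }

        // Bound to the button in the View.
        public ICommand ExecuteUtteranceCommand { get; set; }
    }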

We will create the Luis class first, so open the Luis.cs file. Make the class public.

When we have made requests and received the corresponding result, we want to trigger an event to notify the UI. We want some additional arguments with this event, so, below the Luis class, create a LuisUtteranceResultEventArgs class that inherits from the EventArgs class, as follows:

    public class LuisUtteranceResultEventArgs : EventArgs {
        public string Status { get; set; }
        public string Message { get; set; }
        public LuisResult Result { get; set; }
        public bool RequiresReply { get; set; }
    }

This will contain a Status string, a Message string, and the LuisResult itself. Go back to the Luis class and add an event and a private member, as follows:

    public event EventHandler<LuisUtteranceResultEventArgs> OnLuisUtteranceResultUpdated;

    private LuisClient _luisClient;

We have already discussed the event. The private member is the API access object, which we installed from NuGet:

    public Luis(LuisClient luisClient) {
        _luisClient = luisClient;
    }

The constructor should accept the LuisClient object as a parameter and assign it to the member we previously created.

Let's create a helper method to raise the OnLuisUtteranceResultUpdated event, as follows:

private void RaiseOnLuisUtteranceResultUpdated(LuisUtteranceResultEventArgs args)
{
    OnLuisUtteranceResultUpdated?.Invoke(this, args);
}

This is purely for our own convenience.

To be able to make requests, we will create a function called RequestAsync. This will accept a string as a parameter and have Task as the return type. The function should be marked as async, as follows:

    public async Task RequestAsync(string input) {
        try {
            LuisResult result = await _luisClient.Predict(input);

Inside the function, we make a call to the Predict function of _luisClient. This will send a query to the endpoint we published earlier. A successful request will result in a LuisResult object that contains some data, which we will explore shortly.

We use the result in a new function, where we process it. We make sure that we catch any exceptions and notify any listeners about it using the following code:

            ProcessResult(result);
        }
        catch(Exception ex) {
            RaiseOnLuisUtteranceResultUpdated(new LuisUtteranceResultEventArgs
            {
                Status = "Failed",
                Message = ex.Message
            });
        }
    }

In the ProcessResult function, we create a new object of the LuisUtteranceResultEventArgs type, which will be used when notifying listeners of any results. In this argument object, we add the Succeeded status and the result object. We also compose a message stating the top-scoring intent, the score it was assigned, and the number of entities that were identified:

    private void ProcessResult(LuisResult result) {
        LuisUtteranceResultEventArgs args = new LuisUtteranceResultEventArgs();

        args.Result = result;
        args.Status = "Succeeded";
        args.Message = $"Top intent is {result.TopScoringIntent.Name} with score {result.TopScoringIntent.Score}. Found {result.Entities.Count} entities.";

        RaiseOnLuisUtteranceResultUpdated(args);
    }

With that in place, we head to our view model. Open the LuisViewModel.cs file. Make sure that the class is public and that it inherits from the ObservableObject class.

Declare a private member, as follows:

    private Luis _luis;

This will hold the Luis object we created earlier:

    public LuisViewModel() {
        _luis = new Luis(new LuisClient("APP_ID_HERE", "API_KEY_HERE"));

Our constructor creates the Luis object, making sure it is initialized with a new LuisClient. As you may have noticed, this requires two parameters: the application ID and the subscription key. There is also a third parameter, preview, but we do not need to set it at this time.

The application ID can be found either by looking at the URL in the publishing step or by going to Settings on the application's site at https://www.luis.ai. There, you will find the Application ID, as shown in the following screenshot:

[Screenshot: the Application ID on the application's Settings page]

With the Luis object created, we complete the constructor as follows:

    _luis.OnLuisUtteranceResultUpdated += OnLuisUtteranceResultUpdated;
    ExecuteUtteranceCommand = new DelegateCommand(ExecuteUtterance, CanExecuteUtterance);
}

This will hook up the OnLuisUtteranceResultUpdated event and create a new DelegateCommand for our button. For the command to be able to run, we need to check that some text has been entered in the input field; this check is done in CanExecuteUtterance, shown below.
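A minimal implementation of CanExecuteUtterance could simply check that the input field is not empty, as follows:

    private bool CanExecuteUtterance(object obj) {
        return !string.IsNullOrEmpty(InputText);
    }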

The ExecuteUtterance command is itself rather simple, as shown in the following code:

    private async void ExecuteUtterance(object obj) {
        await _luis.RequestAsync(InputText);
    }

All we do is make a call to the RequestAsync function in the _luis object. We do not need to wait for any results, as these will be coming from the event.

The event handler, OnLuisUtteranceResultUpdated, will format the results and print them to the screen.

First, we make sure that we run the UI update on the application's dispatcher (UI) thread, as the event may be raised on a different thread. We then create a StringBuilder, which will be used to concatenate all the results, as shown in the following code:

private void OnLuisUtteranceResultUpdated(object sender, LuisUtteranceResultEventArgs e) {
    Application.Current.Dispatcher.Invoke(() => {
        StringBuilder sb = new StringBuilder();

First, we append the Status and the Message. We then check whether any entities were detected and, if so, append the number of entities, as follows:

    sb.AppendFormat("Status: {0}
", e.Status);
    sb.AppendFormat("Summary: {0}

", e.Message);

    if(e.Result.Entities != null&&e.Result.Entities.Count != 0) {
        sb.AppendFormat("Entities found: {0}
", e.Result.Entities.Count);
        sb.Append("Entities:
");

If we do have any entities, we loop through each of them, printing out the entity name and the value:

        foreach (var entities in e.Result.Entities) {
            foreach (var entity in entities.Value) {
                sb.AppendFormat("Name: {0}\tValue: {1}\n",
                                entity.Name, entity.Value);
            }
        }
        sb.Append("\n");
    }

Finally, we assign the contents of the StringBuilder to our ResultText property, which displays it on screen, as follows:

            ResultText = sb.ToString();
        });
    }

With everything compiled and running, the result should look something like the following screenshot:

[Screenshot: the LUIS tab in the smart house application, showing a query and its formatted result]

Model improvement through active usage

LUIS is a machine learning service. The applications we create, and the models that are generated, can therefore improve based on use. Throughout the development, it is a good idea to keep an eye on the performance. You may notice some intents that are often mislabeled, or entities that are hard to recognize.

Visualizing performance

On the LUIS website, the dashboard displays information about intent and entity breakdowns. This is basically information on how the intents and entities are distributed across the utterances that have been used.

The following diagram shows what the intent breakdown display looks like:

[Screenshot: the intent breakdown chart on the dashboard]

The following diagram shows what the entity breakdown looks like:

[Screenshot: the entity breakdown chart on the dashboard]

By hovering the mouse over the different bars (or sectors of the pie chart), the name of the intent/entity is displayed, along with the percentage of the total that it accounts for.

Resolving performance problems

If you notice an error in your application, there are typically four ways to resolve it:

  • Adding model features
  • Adding labeled utterances
  • Looking for incorrect utterance labels
  • Changing the schema

We will now look briefly at each of these.

Adding model features

Adding model features is typically something we can do if we have phrases that should be detected as entities, but are not. We have already seen an example of this with the room entity, where one room could be the living room.

The solution is, of course, to add phrase lists or regex features (an example phrase list is shown after the following list). There are three scenarios where this will likely help:

  • When LUIS fails to see words or phrases that are similar.
  • When LUIS has trouble identifying entities. Adding all possible entity values in a phrase list should help.
  • When rare or proprietary words are used.
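As an example of the second scenario, a phrase list for the room entity could simply enumerate the rooms you expect to encounter (the values here are illustrative):

    living room, bedroom, kitchen, bathroom, hallway, garage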

Adding labeled utterances

Adding and labeling more utterances will always improve performance. This will most likely help in the following scenarios:

  • When LUIS fails to differentiate between two intents
  • When LUIS fails to detect entities between surrounding words
  • If LUIS systematically assigns low scores to an intent

Looking for incorrect utterance labels

A common mistake is mislabeling an utterance or entity. In such cases, you will need to find the incorrect utterance and correct it. This will likely resolve problems in the following scenarios:

  • If LUIS fails to differentiate between two intents, even when similar utterances have been labeled
  • If LUIS consistently misses an entity

Changing the schema

If all the preceding solutions fail and you still have problems with the model, you may consider changing the schema, meaning combining, regrouping, and/or dropping intents and entities.

Keep in mind that if it is hard for humans to label an utterance, it is even harder for a machine.

Active learning

A very nice feature of LUIS is active learning. When the service is in active use, it logs all the queries it receives, allowing us to analyze usage, quickly correct errors, and label utterances we have not seen before.

Using the smart house application we have built, if we run a query with the utterance can you tell me the bedroom temperature?, the model will likely not recognize it. If we debug the process, stepping through the ProcessResult function, we will see the following values returned:

[Screenshot: the values returned in ProcessResult when debugging the query]

As you can see from the preceding screenshot, the top-scoring intent is None, with a score of 0.61. In addition, no entities have been recognized, so this is not good.

Head back to the LUIS website. Move to the Review endpoint utterances page, which can be found in the left-hand menu. Here, we can see that the utterance we just tried has been added. We can now label the intent and entity correctly, as shown in the following screenshot:

[Screenshot: labeling the logged utterance on the Review endpoint utterances page]

By labeling the utterance with the correct intent and entity, we will get a correct result the next time we query in this way, as you can see in the following screenshot:

[Screenshot: the corrected result for the same query after labeling]