What about cross-validation?

Often, in the case of smaller datasets, data scientists employ a technique known as cross-validation, which is also available to you in Spark. The CrossValidator class starts by splitting the dataset into N folds (user declared); each fold is used N-1 times as part of the training set and once for model validation. For example, if we declare that we wish to use 5-fold cross-validation, the CrossValidator class will create five pairs of training and testing datasets, each using four-fifths of the data as the training set and the remaining fifth as the test set, as shown in Figure 8.
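A minimal sketch of what this looks like in code with Spark ML's CrossValidator follows. The LogisticRegression estimator, the BinaryClassificationEvaluator, and the trainingData DataFrame (with the usual features and label columns) are assumptions made here for illustration; any Spark ML estimator and evaluator pair can be substituted.

import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
import org.apache.spark.ml.tuning.{CrossValidator, ParamGridBuilder}

// The estimator under validation; logistic regression is just a placeholder.
val lr = new LogisticRegression()

// An empty parameter grid: we only want fold-by-fold validation here,
// not a hyper-parameter search.
val paramGrid = new ParamGridBuilder().build()

val cv = new CrossValidator()
  .setEstimator(lr)
  .setEvaluator(new BinaryClassificationEvaluator()) // area under ROC by default
  .setEstimatorParamMaps(paramGrid)
  .setNumFolds(5) // the user-declared N: five training/testing pairs

// trainingData is an assumed DataFrame with "features" and "label" columns.
val cvModel = cv.fit(trainingData)

// The evaluator's metric, averaged over the five folds (one entry per
// parameter combination; a single entry for the empty grid).
println(cvModel.avgMetrics.mkString(", "))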

The idea is that we observe the performance of our algorithm across several different, randomly sampled datasets, which accounts for the inherent sampling bias of creating a single training/testing split on 80% of the data. An example of a model that does not generalize well would be one whose accuracy - as measured by overall error, for example - varies wildly from fold to fold, which would suggest that we need to rethink our model.
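Since CrossValidatorModel reports only the metric averaged across folds, one way to inspect this fold-to-fold variance is to build the folds by hand. The following is a rough sketch under the same assumptions as above:

import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator

val lr = new LogisticRegression()
val evaluator = new BinaryClassificationEvaluator()

// Split the data into five roughly equal, randomly sampled folds.
val folds = trainingData.randomSplit(Array.fill(5)(1.0 / 5), seed = 42L)

// Hold each fold out once; train on the union of the remaining four.
val perFoldMetrics = folds.indices.map { i =>
  val test  = folds(i)
  val train = folds.indices.filter(_ != i).map(j => folds(j)).reduce(_ union _)
  val model = lr.fit(train)
  evaluator.evaluate(model.transform(test))
}

// Wildly different numbers here are the warning sign described above.
println(perFoldMetrics.mkString(", "))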

Figure 8 - Conceptual schema of 5-fold cross-validation.

There is no set rule on how many folds you should use, as the answer depends heavily on the type of data, the number of examples, and so on. In some cases, it makes sense to take cross-validation to the extreme, where N is equal to the number of data points in the input dataset, so that each test set contains only one row. This method is called Leave-One-Out (LOO) validation and is correspondingly more computationally expensive, since a separate model must be trained for every row.
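As a sketch, LOO can be expressed with the same CrossValidator by setting the number of folds to the row count - practical only when the dataset is genuinely small:

import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
import org.apache.spark.ml.tuning.{CrossValidator, ParamGridBuilder}

// One fold per row: every model is validated against a single held-out row.
val n = trainingData.count().toInt

val loo = new CrossValidator()
  .setEstimator(new LogisticRegression())
  .setEvaluator(new BinaryClassificationEvaluator())
  .setEstimatorParamMaps(new ParamGridBuilder().build())
  .setNumFolds(n) // N == number of rows => Leave-One-Out

val looModel = loo.fit(trainingData)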

In general, it is recommended that you perform some form of cross-validation (5-fold or 10-fold is common) during model construction to validate the quality of a model, especially when the dataset is small.
