Combining multiple methods

We now combine the aforementioned methods into a single prediction. Intuitively, this seems like a good idea, but how can we do it in practice? Perhaps the first thought that comes to mind is to average the predictions. This might give decent results, but there is no reason to treat all the predictions the same: it may well be that one is better than the others.

We can try a weighted average, multiplying each prediction by a given weight before summing it all up. How do we find the best weights, though? We learn them from the data, of course!
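
As a minimal sketch of the idea (the predictions and weights below are made-up numbers; the rest of this section shows how to learn good weights from data), a weighted average of three hypothetical predictions could look like this:

import numpy as np

# Three hypothetical predictions for the same items (made-up numbers)
pred_a = np.array([3.5, 4.0, 2.5])
pred_b = np.array([3.0, 4.5, 3.0])
pred_c = np.array([4.0, 3.5, 2.0])

# Made-up weights; learning them from data is the point of this section
weights = np.array([0.5, 0.3, 0.2])
combined = weights[0]*pred_a + weights[1]*pred_b + weights[2]*pred_c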

Ensemble learning:
We are using a general machine learning technique that is not only applicable to regression: ensemble learning. We learn an ensemble (that is, a set) of predictors and then combine them to obtain a single output. What is interesting is that we can see each prediction as being a new feature: we are now just combining features based on training data, which is what we have been doing all along. Note that we are doing this for regression here, but the same reasoning is applicable to classification: you learn several classifiers, then a master classifier that takes the outputs of all of them and produces a final prediction. Different forms of ensemble learning differ in how the base predictors are combined.
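
For the classification case, recent versions of scikit-learn even ship a ready-made implementation of this idea. The following is only a minimal, standalone sketch; the dataset and the choice of base classifiers are placeholders, not anything from this chapter:

from sklearn.datasets import load_iris
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
# Two base classifiers, combined by a "master" logistic regression
ensemble = StackingClassifier(
    estimators=[('knn', KNeighborsClassifier()),
                ('tree', DecisionTreeClassifier())],
    final_estimator=LogisticRegression())
ensemble.fit(X, y)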

 

In order to combine the methods, we will use a technique called stacked learning. The idea is that you learn a set of predictors, then you use the output of these predictors as features for another predictor. You can even have several layers, where each layer learns by using the output of the previous layer as features for its prediction. Have a look at the following diagram:

[Diagram: stacked learning, with the outputs of the base predictors used as input features for a final, combining predictor]

In order to fit this combination model, we will split the training data in two. Alternatively, we could have used cross-validation (this is how the original stacked-learning formulation worked). However, in this case, we have enough data to obtain good estimates by leaving some of it aside.
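
To give a flavor of the cross-validation variant, here is a standalone sketch on synthetic data with placeholder scikit-learn regressors (this is not the ratings code we use below): each base model produces out-of-fold predictions, and those predictions become the features for the combining model.

import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.model_selection import cross_val_predict
from sklearn.neighbors import KNeighborsRegressor

X, y = make_regression(n_samples=200, n_features=10, noise=10.0, random_state=0)
# Out-of-fold predictions: no sample is predicted by a model that saw it in training
p_ridge = cross_val_predict(Ridge(), X, y, cv=5)
p_knn = cross_val_predict(KNeighborsRegressor(), X, y, cv=5)
# The out-of-fold predictions become the features for the combining model
stack = np.column_stack([p_ridge, p_knn])
combiner = LinearRegression().fit(stack, y)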

Just as when we were fitting hyperparameters, we need two layers of training/testing splits: a first, higher-level split and then, inside the training split, a second split used to fit the stacked learner. This is analogous to the use of multiple levels of cross-validation, where an inner cross-validation loop is used to find hyperparameter values:

import numpy as np
from sklearn import linear_model, metrics

import load_ml100k

train, test = load_ml100k.get_train_test(random_state=12)
# Now split the training again into two subgroups
tr_train, tr_test = load_ml100k.get_train_test(train)

# Compute the predictions of each base method (defined earlier in this chapter)
# on the inner training split
tr_predicted0 = predict_positive_nn(tr_train)
tr_predicted1 = predict_positive_nn(tr_train.T).T 
tr_predicted2 = predict_regression(tr_train) 
tr_predicted3 = predict_regression(tr_train.T).T 
# Now assemble these predictions into a single array: 
stack_tr = np.array([ 
    tr_predicted0[tr_test > 0], 
    tr_predicted1[tr_test > 0], 
    tr_predicted2[tr_test > 0], 
    tr_predicted3[tr_test > 0], 
    ]).T 
 
# Fit a simple linear regression 
lr = linear_model.LinearRegression() 
lr.fit(stack_tr, tr_test[tr_test > 0]) 
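
If you are curious about which weights were learned, you can inspect the coefficients of the linear regression we just fit (one weight per base predictor, plus an intercept). These play exactly the role of the weights in the weighted average discussed at the beginning of this section:

# One learned weight per base predictor, plus an intercept
print(lr.coef_)
print(lr.intercept_)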

Now, we use the fitted combination to predict ratings for the whole user-movie matrix and evaluate the result on the testing split:

# Reuse the base predictions computed above (they cover the whole matrix)
stack_te = np.array([
    tr_predicted0.ravel(), 
    tr_predicted1.ravel(), 
    tr_predicted2.ravel(), 
    tr_predicted3.ravel(), 
    ]).T 
predicted = lr.predict(stack_te).reshape(train.shape) 

Evaluation is as before:

r2 = metrics.r2_score(test[test > 0], predicted[test > 0]) 
print('R2 stacked: {:.2%}'.format(r2)) 
R2 stacked: 33.15% 

The result of stacked learning is better than what any single method achieved on its own. It is quite typical that combining methods is a simple way to obtain a small performance boost, but that the results are not earth-shattering.

By having a flexible way to combine multiple methods, we can simply try any idea we wish by adding it into the mix of learners and letting the system fold it into the prediction. We can, for example, replace the neighborhood criterion in the nearest-neighbor code.
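
For example, adding one more base method only requires appending an extra column on each side before refitting the combiner. Here, predict_new_method is just a hypothetical placeholder name for whatever idea you want to try:

# predict_new_method is a hypothetical extra base predictor
tr_predicted4 = predict_new_method(tr_train)
stack_tr = np.array([
    tr_predicted0[tr_test > 0],
    tr_predicted1[tr_test > 0],
    tr_predicted2[tr_test > 0],
    tr_predicted3[tr_test > 0],
    tr_predicted4[tr_test > 0],
    ]).T
# (remember to append the matching column to stack_te as well)
lr.fit(stack_tr, tr_test[tr_test > 0])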

However, we do have to be careful not to overfit our dataset. If we randomly try too many things, some of them will work well on a particular dataset by chance, but will not generalize. Even though we are splitting our data, we are not rigorously cross-validating our design decisions. In order to get a good estimate, and if data is plentiful, you should leave a portion of it untouched until you have a final model that is about to go into production. Testing your model on this held-out data then gives you an unbiased estimate of how well you should expect it to work in the real world.
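
As a rough sketch of that discipline (the 80/20 fraction and the toy array are arbitrary placeholders), you would carve off the hold-out portion before doing any model development and only touch it once, at the very end:

import numpy as np
from sklearn.model_selection import train_test_split

# Hypothetical dataset; in practice these would be your rating records
all_data = np.arange(1000)
# Reserve 20% as a final hold-out, untouched until the very end
development, holdout = train_test_split(all_data, test_size=0.2, random_state=0)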

Of course, collaborative filtering also works with neural networks, but don't forget to keep validation data available for testing (or, more precisely, validating) your ensemble model.