Stacking

Moving on to more complex ensembles, we will use stacking to combine the basic regressors more effectively. Using the StackingRegressor from Chapter 4, Stacking, we will try to combine the same algorithms that we used with voting. First, we modify our ensemble's predict function (to allow for single-instance prediction), as follows:

# Generates the predictions
def predict(self, x_data):

    # Create the predictions matrix
    predictions = np.zeros((len(x_data), len(self.base_learners)))

    names = list(self.base_learners.keys())

    # For each base learner
    for i in range(len(self.base_learners)):
        name = names[i]
        learner = self.base_learners[name]

        # Store the predictions in a column
        preds = learner.predict(x_data)
        predictions[:, i] = preds

    # Take the row-average
    predictions = np.mean(predictions, axis=1)
    return predictions
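Because predict builds an (n_samples, n_learners) matrix, single-instance prediction only requires keeping the usual 2-D input shape of (1, n_features). The following stand-alone sketch mirrors the loop above outside the class; the toy data and the two-learner dictionary are illustrative assumptions, not the chapter's Bitcoin dataset:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor

# Toy training data (hypothetical; the chapter uses price-derived features)
rng = np.random.default_rng(0)
x_train = rng.normal(size=(100, 3))
y_train = x_train @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)

# Fitted base learners, stored in a dict as the ensemble expects
base_learners = {'LR': LinearRegression(), 'KNN': KNeighborsRegressor()}
for learner in base_learners.values():
    learner.fit(x_train, y_train)

# A single instance, kept 2-D so predictions has shape (1, n_learners)
x_single = x_train[:1]
predictions = np.zeros((len(x_single), len(base_learners)))
for i, learner in enumerate(base_learners.values()):
    predictions[:, i] = learner.predict(x_single)

# Row-average over the learners yields one combined prediction
combined = np.mean(predictions, axis=1)
```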

Again, we modify the code to use the stacking regressor, as follows:

base_learners = [[SVR(), LinearRegression(), KNeighborsRegressor()],
                 [LinearRegression()]]
lr = StackingRegressor(base_learners)
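For readers without Chapter 4 at hand, the list-of-lists argument describes the stacking levels: the first inner list holds the base learners and the second holds the meta-learner. The sketch below (SimpleStackingRegressor is my own simplified stand-in, not the book's class) shows one way such an interface could work; note that for brevity it fits the meta-learner on in-sample base predictions, whereas a proper implementation uses out-of-fold (K-fold) meta-features to avoid leakage:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR

class SimpleStackingRegressor:
    """Two-level stacking: level-0 base learners feed a level-1 meta-learner.

    Hypothetical simplification: the meta-learner is trained on in-sample
    base predictions rather than out-of-fold ones.
    """

    def __init__(self, learners):
        self.base_learners = learners[0]
        self.meta_learner = learners[1][0]

    def _meta_features(self, x_data):
        # One column of predictions per base learner
        meta = np.zeros((len(x_data), len(self.base_learners)))
        for i, learner in enumerate(self.base_learners):
            meta[:, i] = learner.predict(x_data)
        return meta

    def fit(self, x_data, y_data):
        # Fit the base learners, then the meta-learner on their predictions
        for learner in self.base_learners:
            learner.fit(x_data, y_data)
        self.meta_learner.fit(self._meta_features(x_data), y_data)
        return self

    def predict(self, x_data):
        return self.meta_learner.predict(self._meta_features(x_data))

# Toy demonstration on synthetic data
rng = np.random.default_rng(0)
x = rng.normal(size=(200, 3))
y = x @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=200)

ensemble = SimpleStackingRegressor(
    [[SVR(), LinearRegression(), KNeighborsRegressor()],
     [LinearRegression()]])
preds = ensemble.fit(x, y).predict(x)
```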

With this setup, the ensemble yields a model with an MSE of 16.17 and a Sharpe value of 0.21.
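The Sharpe value quoted here is computed elsewhere in the chapter; one common convention (an assumption on my part, not the book's exact code) divides the mean per-period return of the simulated strategy by its standard deviation, with the risk-free rate taken as zero:

```python
import numpy as np

def sharpe(returns):
    """Mean return divided by its standard deviation (risk-free rate = 0)."""
    returns = np.asarray(returns, dtype=float)
    return returns.mean() / returns.std()

# Hypothetical per-period returns from a simulated trading strategy
rets = np.array([0.01, -0.005, 0.02, 0.0, 0.015])
value = sharpe(rets)
```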
