Boosting

The second generative method we will discuss is boosting. Boosting aims to combine a number of weak learners into a strong ensemble. It is able to reduce not only bias, but also variance. Here, weak learners are individual models that perform only slightly better than random guessing. For example, on a classification dataset with two classes and an equal number of instances belonging to each class, a weak learner will classify the data with an accuracy only slightly above 50%.
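
To make the notion of a weak learner concrete, here is a minimal sketch (an illustration under our own assumptions, not one of the chapter's examples; the dataset and variable names are made up) that trains a depth-1 decision tree, known as a decision stump, on a balanced synthetic two-class dataset:

    # A decision stump (depth-1 tree) is the classic weak learner.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.metrics import accuracy_score

    # Balanced two-class data with label noise, so random guessing
    # scores about 50% and a single split cannot do much better.
    X, y = make_classification(n_samples=1000, n_features=20,
                               n_informative=5, flip_y=0.3,
                               random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0)

    stump = DecisionTreeClassifier(max_depth=1, random_state=0)
    stump.fit(X_train, y_train)

    # The score lands above the 50% baseline, but well short of a
    # strong model's accuracy.
    print(accuracy_score(y_test, stump.predict(X_test)))

This "slightly better than random" behavior is exactly what boosting exploits: many such stumps, combined appropriately, can form a strong ensemble.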

In this chapter, we will present two classic boosting algorithms, Gradient Boosting and AdaBoost. Furthermore, we will explore the use of their scikit-learn implementations for classification and regression (a short preview follows the topic list below). Finally, we will experiment with a more recent boosting algorithm and its implementation, XGBoost.

The main topics covered are as follows:

  • The motivation behind using boosting ensembles
  • The various algorithms
  • Leveraging scikit-learn to create boosting ensembles in Python
  • Utilizing the XGBoost library for Python
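
As a brief preview of the scikit-learn side (a hedged sketch with an assumed dataset and parameter choices, not the chapter's worked example), both boosting classifiers follow the usual estimator interface and can be evaluated in a few lines:

    # Preview: the scikit-learn boosting classifiers covered later.
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import cross_val_score
    from sklearn.ensemble import AdaBoostClassifier, GradientBoostingClassifier

    X, y = load_breast_cancer(return_X_y=True)

    # By default, AdaBoostClassifier boosts decision stumps.
    for model in (AdaBoostClassifier(n_estimators=100, random_state=0),
                  GradientBoostingClassifier(random_state=0)):
        scores = cross_val_score(model, X, y, cv=5)
        print(type(model).__name__, round(scores.mean(), 3))

The regression counterparts, AdaBoostRegressor and GradientBoostingRegressor, expose the same interface, which is what allows the chapter to cover classification and regression side by side.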