Strengths and weaknesses

Boosting algorithms can reduce both bias and variance. For a long time they were considered immune to overfitting, but they can in fact overfit, although they are quite robust against it. One possible explanation is that, in order to classify outliers, the base learners create very strong and complicated rules that rarely fit any other instance. The following diagram depicts an example: the ensemble has generated a set of rules in order to correctly classify the outlier, but the rules are so specific that only an identical example (one with the exact same feature values) would fall into the sub-space they define:

Generated rules for an outlier
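To see this behaviour in practice, here is a minimal sketch, assuming scikit-learn is available: a synthetic, well-separated dataset gets one deliberately mislabeled training point, and an AdaBoost ensemble of decision stumps is grown until it memorizes that outlier. The dataset and variable names are hypothetical, for illustration only.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

# Hypothetical toy problem: two well-separated classes.
X, y = make_classification(n_samples=500, n_features=2, n_informative=2,
                           n_redundant=0, class_sep=2.0, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

# Flip one training label to create an outlier inside the other class.
y_train = y_train.copy()
y_train[0] = 1 - y_train[0]

# Compare a small ensemble against a large one (the default base
# learner is a decision stump).
for n_estimators in (10, 500):
    ensemble = AdaBoostClassifier(n_estimators=n_estimators, random_state=1)
    ensemble.fit(X_train, y_train)
    print(f"{n_estimators:4d} estimators: "
          f"train={ensemble.score(X_train, y_train):.3f}, "
          f"test={ensemble.score(X_test, y_test):.3f}")
```

With enough rounds, training accuracy typically reaches 100% because the ensemble carves out a tiny region around the flipped point, while test accuracy stays roughly flat or dips slightly; the outlier-specific rules rarely fit any other instance.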

One disadvantage of many boosting algorithms is that they are not easily parallelized, since the models are built sequentially: each learner depends on the errors of its predecessor, as the sketch below contrasts. Furthermore, they share the usual drawbacks of ensemble learning techniques, such as reduced interpretability and additional computational cost.
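As a brief illustration of this constraint (a sketch on a hypothetical dataset, not a benchmark): in scikit-learn, BaggingClassifier exposes an n_jobs parameter because its base learners are independent and can be fit on separate cores, whereas AdaBoostClassifier has no such parameter, since each round reweights the data based on the previous learner's errors.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier

# Hypothetical dataset, for illustration only.
X, y = make_classification(n_samples=2000, random_state=0)

# Bagging: base learners are independent, so training fans out
# across all available cores with n_jobs=-1.
bagging = BaggingClassifier(n_estimators=100, n_jobs=-1, random_state=0)
bagging.fit(X, y)

# Boosting: no n_jobs option; round t+1 needs the sample weights
# produced by round t, so learners are trained one after another.
boosting = AdaBoostClassifier(n_estimators=100, random_state=0)
boosting.fit(X, y)
```

Note that the sequential constraint applies to the boosting rounds themselves; implementations such as XGBoost and LightGBM still parallelize the construction of each individual tree.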
