Another approach to building ensembles is boosting. A boosting model trains multiple individual learners in sequence, with each new learner improving on the performance of the ensemble built so far.
Typically, the learners used in boosting are relatively simple. A classic example is a decision tree with only a single split, known as a decision stump; a simple linear regression model is another. The idea is not to have the strongest possible individual learners but quite the opposite: we want the individuals to be weak learners, so that superior performance emerges only when a large number of them are combined.
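To make the notion of a weak learner concrete, here is a minimal sketch of a decision stump in plain Python. The function names (`fit_stump`, `predict_stump`) are illustrative, not from any particular library; labels are assumed to be in {-1, +1}.

```python
def fit_stump(X, y):
    """Exhaustively pick the (feature, threshold, polarity) triple that
    minimizes the number of misclassified points.

    X: list of feature vectors; y: labels in {-1, +1}.
    """
    n_features = len(X[0])
    best, best_err = None, float("inf")
    for f in range(n_features):
        for t in sorted({x[f] for x in X}):
            for polarity in (1, -1):
                # Count how many points this single split gets wrong.
                err = sum(
                    1 for x, label in zip(X, y)
                    if polarity * (1 if x[f] >= t else -1) != label
                )
                if err < best_err:
                    best_err, best = err, (f, t, polarity)
    return best


def predict_stump(stump, x):
    """Classify x with a single threshold test on one feature."""
    f, t, polarity = stump
    return polarity * (1 if x[f] >= t else -1)
```

On its own, such a one-split classifier can only draw a single axis-aligned boundary, which is exactly why it qualifies as a weak learner.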
At each iteration of the procedure, the training set is reweighted so that the next learner concentrates on the data points that the preceding learners got wrong. Over multiple iterations, the ensemble grows by one learner at a time, with each new learner (for example, a new tree) chosen to best improve the ensemble's performance.
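The reweighting loop can be sketched in the style of AdaBoost, one well-known instance of this procedure. This is a hedged toy implementation on one-dimensional data with decision stumps; the helper names (`fit_stump`, `boost`, `predict`) and the number of rounds are illustrative assumptions.

```python
import math

def fit_stump(X, y, w):
    """Weighted decision stump on 1-D data: pick the (threshold,
    polarity) pair with the lowest *weighted* error."""
    best, best_err = None, float("inf")
    for t in sorted(set(X)):
        for pol in (1, -1):
            err = sum(wi for x, yi, wi in zip(X, y, w)
                      if pol * (1 if x >= t else -1) != yi)
            if err < best_err:
                best_err, best = err, (t, pol)
    return best, best_err


def boost(X, y, rounds=5):
    """AdaBoost-style loop: fit a stump, weight it by its accuracy,
    then upweight the points it misclassified."""
    n = len(X)
    w = [1.0 / n] * n                      # start with uniform weights
    ensemble = []                          # list of (alpha, stump) pairs
    for _ in range(rounds):
        (t, pol), err = fit_stump(X, y, w)
        err = max(err, 1e-10)              # guard against division by zero
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, (t, pol)))
        # Misclassified points get larger weights, correct ones smaller;
        # renormalize so the weights remain a distribution.
        w = [wi * math.exp(-alpha * yi * pol * (1 if x >= t else -1))
             for x, yi, wi in zip(X, y, w)]
        total = sum(w)
        w = [wi / total for wi in w]
    return ensemble


def predict(ensemble, x):
    """Final prediction: sign of the alpha-weighted vote of all stumps."""
    score = sum(a * pol * (1 if x >= t else -1) for a, (t, pol) in ensemble)
    return 1 if score >= 0 else -1
```

The key step is the weight update: each stump's vote is scaled by `alpha`, which grows as its weighted error shrinks, and the points that stump misclassified carry more weight in the next round.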