How it works...

In this recipe, we estimated the error rates of four different classifiers using the errorest function from the ipred package. We compared the boosting, bagging, and random forest methods against a single decision tree classifier. The errorest function performs a 10-fold cross-validation on each classifier and calculates the misclassification error. The estimation results from the four chosen models reveal that the boosting method performs best, with the lowest error rate (0.0475). The random forest method has the second lowest error rate (0.051), while the bagging method has an error rate of 0.0583. The single decision tree classifier, rpart, performs worst among the four methods, with an error rate of 0.0674. These results show that all three ensemble learning methods, boosting, bagging, and random forest, outperform a single decision tree classifier.
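The calls behind these estimates all follow the same pattern. Here is a minimal sketch of errorest applied to a single rpart tree and to bagging, assuming the recipe's training data is a data frame named trainset with a factor response churn (both names are placeholders for whatever data the recipe actually loaded):

```r
library(ipred)
library(rpart)

# errorest needs class-label predictions, so wrap rpart's predict,
# which would otherwise return a matrix of class probabilities
rpart.predict <- function(object, newdata)
  predict(object, newdata = newdata, type = "class")

# 10-fold cross-validated misclassification error (errorest's
# default estimator is "cv" with k = 10) for a single decision tree
errorest(churn ~ ., data = trainset, model = rpart,
         predict = rpart.predict)

# ... and for bagging; ipred's bagging predicts class labels
# directly, so no wrapper is needed
errorest(churn ~ ., data = trainset, model = bagging)
```

Swapping the model argument (and, where the model returns probabilities, the predict wrapper) for the boosting and random forest fitting functions yields the remaining two estimates compared above.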
