Corresponding machine learning algorithm – the Naive Bayes Classifier

In the preceding example, we showed you how to calculate a post-test probability given a pretest probability, a likelihood, and a test result. The machine learning algorithm known as the Naive Bayes Classifier performs this update sequentially for every feature of a given observation. For example, in the preceding example, the post-test probability was 14.3%. Let's pretend that the patient now has a troponin drawn and it is elevated. 14.3% now becomes the pretest probability, and a new post-test probability is calculated based on the contingency table for troponin and MI, where the contingency tables are obtained from the training data. This continues until all the features are exhausted. Again, the key ("naive") assumption is that each feature is independent of all the others, conditional on the outcome. The classifier then assigns the observation to the category (outcome) with the highest post-test probability.
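The sequential update described above can be sketched in a few lines of Python. The 14.3% pretest probability comes from the preceding example; the troponin likelihoods below are illustrative stand-ins for values that would, in practice, be read off the training-data contingency table, not real clinical figures.

```python
def bayes_update(pretest_prob, p_pos_given_disease, p_pos_given_no_disease):
    """One Bayes update: post-test probability after a positive finding.

    p_pos_given_disease    -- P(finding positive | disease), i.e. sensitivity
    p_pos_given_no_disease -- P(finding positive | no disease), the false-positive rate
    Both would be estimated from the training-data contingency table.
    """
    numerator = pretest_prob * p_pos_given_disease
    denominator = numerator + (1 - pretest_prob) * p_pos_given_no_disease
    return numerator / denominator

# Pretest probability of MI from the preceding example
prob = 0.143

# Hypothetical likelihoods for an elevated troponin (illustrative only)
prob = bayes_update(prob, p_pos_given_disease=0.90, p_pos_given_no_disease=0.05)
print(round(prob, 3))
```

Each additional feature repeats the same call, feeding the previous post-test probability back in as the new pretest probability, which is exactly the chaining the classifier performs.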

The Naive Bayes Classifier is popular for a select group of applications. Its advantages include high interpretability, robustness to missing data, and fast, simple training and prediction. However, its conditional-independence assumption is rarely satisfied in practice, which limits its accuracy relative to more state-of-the-art algorithms.
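The ease of training and prediction is visible in how little code a full workflow takes. The sketch below uses scikit-learn's `BernoulliNB` on a tiny made-up dataset of binary clinical findings; the data and feature names are purely illustrative, not drawn from any real cohort.

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB

# Hypothetical training data: rows are patients, columns are binary findings
# (say, chest pain and elevated troponin); labels are 1 = MI, 0 = no MI.
X = np.array([[1, 1], [1, 0], [0, 1], [0, 0], [1, 1], [0, 0]])
y = np.array([1, 0, 1, 0, 1, 0])

clf = BernoulliNB()          # one line to train...
clf.fit(X, y)

new_patient = [[1, 1]]       # ...and one line to predict
print(clf.predict(new_patient))        # predicted outcome class
print(clf.predict_proba(new_patient))  # post-test probability for each class
```

Under the hood, `BernoulliNB` is estimating the same per-feature contingency tables from the training data and chaining the Bayes updates described earlier.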
