In the Getting classification straight with the confusion matrix recipe, you learned that we can label classified samples as true positives (TP), false positives (FP), true negatives (TN), and false negatives (FN). With the counts of these categories, we can calculate many evaluation metrics, four of which we will cover in this recipe, as given by the following equations:

Accuracy = (TP + TN) / (TP + TN + FP + FN)
Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
F1 = 2 * Precision * Recall / (Precision + Recall)
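As a concrete check of these metrics, the following sketch uses made-up labels (not the book's rain data) to compute the four scores directly from confusion-matrix counts and cross-checks them against scikit-learn's implementations:

```python
import numpy as np
from sklearn import metrics

# Hypothetical labels, chosen so the counts are easy to verify by hand:
# TP = 4, FP = 1, FN = 1, TN = 4.
y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0, 1, 0])
y_pred = np.array([1, 1, 0, 1, 0, 1, 0, 0, 1, 0])

# For binary labels, confusion_matrix().ravel() yields (tn, fp, fn, tp).
tn, fp, fn, tp = metrics.confusion_matrix(y_true, y_pred).ravel()

accuracy = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)

# The hand-rolled values agree with scikit-learn's functions.
assert np.isclose(accuracy, metrics.accuracy_score(y_true, y_pred))
assert np.isclose(precision, metrics.precision_score(y_true, y_pred))
assert np.isclose(recall, metrics.recall_score(y_true, y_pred))
assert np.isclose(f1, metrics.f1_score(y_true, y_pred))
print(accuracy, precision, recall, f1)
```

Here all four metrics happen to coincide because the false positive and false negative counts are equal; on imbalanced data they usually diverge.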
These metrics range from zero to one, with zero being the worst theoretical score and one being the best. In practice, however, the worst score is the one we would get by random guessing, and the best score may fall short of one: in some cases we can only hope to match human performance, and there may be ambiguity about what the correct classification is, for instance, in the case of sentiment analysis (covered in the Python Data Analysis book).
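The random-guessing baseline can be illustrated with scikit-learn's DummyClassifier; the data below is synthetic and purely for illustration:

```python
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.metrics import accuracy_score

rng = np.random.RandomState(42)
y = rng.randint(0, 2, size=1000)  # balanced binary labels
X = np.zeros((1000, 1))           # features carry no information

# strategy='uniform' predicts each class with equal probability,
# mimicking a coin-flip classifier.
dummy = DummyClassifier(strategy='uniform', random_state=0)
dummy.fit(X, y)
acc = accuracy_score(y, dummy.predict(X))

# With uninformative features and balanced classes, accuracy hovers
# around 0.5 -- the practical floor, well above the theoretical zero.
print(round(acc, 2))
```

Any real classifier should beat this baseline; comparing against it is a quick sanity check before reading too much into an accuracy number.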
import numpy as np
from sklearn import metrics
import ch10util
import dautil as dl
from IPython.display import HTML
y_test = np.load('rain_y_test.npy')
accuracies = [metrics.accuracy_score(y_test, preds)
              for preds in ch10util.rain_preds()]
precisions = [metrics.precision_score(y_test, preds)
              for preds in ch10util.rain_preds()]
recalls = [metrics.recall_score(y_test, preds)
           for preds in ch10util.rain_preds()]
f1s = [metrics.f1_score(y_test, preds)
       for preds in ch10util.rain_preds()]
sp = dl.plotting.Subplotter(2, 2, context)
ch10util.plot_bars(sp.ax, accuracies)
sp.label()
ch10util.plot_bars(sp.next_ax(), precisions)
sp.label()
ch10util.plot_bars(sp.next_ax(), recalls)
sp.label()
ch10util.plot_bars(sp.next_ax(), f1s)
sp.label()
sp.fig.text(0, 1, ch10util.classifiers())
HTML(sp.exit())
Refer to the following screenshot for the end result:
The code is in the precision_recall.ipynb file in this book's code bundle.
The precision_score() function documented at http://scikit-learn.org/stable/modules/generated/sklearn.metrics.precision_score.html (retrieved November 2015)
The recall_score() function documented at http://scikit-learn.org/stable/modules/generated/sklearn.metrics.recall_score.html (retrieved November 2015)
The f1_score() function documented at http://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html (retrieved November 2015)