Training and evaluating the network

We start by creating a session to run the graph we built. Note that training is faster on a GPU, but the code runs fine on a CPU as well. The log_device_placement=True option only makes TensorFlow log which device (CPU or GPU) each operation is placed on; set it to False if you want to suppress that output:

session = tf.Session(graph=graph, config=tf.ConfigProto(log_device_placement=True))
session.run(init_op)
for i in range(300):
    _, loss_value = session.run([train, loss_op],
                                feed_dict={images_X: images_array, labels_X: labels_array})
    if i % 10 == 0:
        print("Loss: ", loss_value)
>>>
Loss: 4.7910895
Loss: 4.3410876
Loss: 4.0275432
...
Loss: 0.523456
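The loop above logs the loss every 10th iteration. The cadence can be sketched in plain Python, with a made-up, geometrically decaying loss standing in for the real loss_op, to show how many log lines 300 iterations produce:

```python
# Toy stand-in for the training loop above: a fake loss that decays
# geometrically, logged every 10th iteration just like the real loop.
logged = []
loss_value = 4.8  # made-up starting value, roughly matching the output above
for i in range(300):
    loss_value *= 0.99  # pretend each training step shrinks the loss slightly
    if i % 10 == 0:
        logged.append((i, loss_value))

print(len(logged))                  # 30 log lines: 300 iterations / 10
print(logged[0][0], logged[-1][0])  # first and last logged iteration: 0 290
```

With 300 iterations you therefore see 30 loss lines, which is why the output above is abbreviated.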

Once training is complete, let's pick 10 random images and check the predictive power of our model:

random_indexes = random.sample(range(len(images32)), 10)
random_images = [images32[i] for i in random_indexes]
random_labels = [labels[i] for i in random_indexes]
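Note that random.sample draws without replacement, so the 10 indexes are guaranteed to be distinct. Here is a quick standalone check, using a made-up dataset size of 4575 as a stand-in for len(images32):

```python
import random

random.seed(0)  # fixed seed so the draw is reproducible
# 4575 is a made-up stand-in for len(images32)
indexes = random.sample(range(4575), 10)

print(len(indexes))                         # 10 picks
print(len(set(indexes)))                    # still 10: no index repeats
print(all(0 <= i < 4575 for i in indexes))  # True: every index is a valid position
```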

Then let's run the predicted_labels op:

predicted = session.run([predicted_labels], feed_dict={images_X: random_images})[0]
print(random_labels)
print(predicted)
>>>
[38, 21, 19, 39, 22, 22, 45, 18, 22, 53]
[20 21 19 51 22 22 45 53 22 53]
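Using the two lists printed above, a quick check counts how many of the 10 sample predictions are correct:

```python
truth     = [38, 21, 19, 39, 22, 22, 45, 18, 22, 53]  # ground-truth labels from above
predicted = [20, 21, 19, 51, 22, 22, 45, 53, 22, 53]  # model predictions from above

# Pair the two lists up and count exact matches.
matches = [t == p for t, p in zip(truth, predicted)]
print(sum(matches), "of", len(matches), "correct")  # 7 of 10 correct
```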

So some images were classified correctly and some were not. Visual inspection is more helpful here, so let's display the predictions alongside the ground truth:

fig = plt.figure(figsize=(5, 5))
for i in range(len(random_images)):
    truth = random_labels[i]
    prediction = predicted[i]
    plt.subplot(5, 2, 1 + i)
    plt.axis('off')
    color = 'green' if truth == prediction else 'red'
    plt.text(40, 10, "Truth: {0}\nPrediction: {1}".format(truth, prediction),
             fontsize=12, color=color)
    plt.imshow(random_images[i])
>>>

Finally, we can evaluate our model using the test set. To see the predictive power, we compute the accuracy:

# Load the test dataset.
test_X, test_y = load_data(test_data_dir)

# Transform the images, just as we did with the training set.
test_images32 = [skimage.transform.resize(img, (32, 32), mode='constant')
                 for img in test_X]
display_images_and_labels(test_images32, test_y)

# Run predictions against the test set.
predicted = session.run([predicted_labels], feed_dict={images_X: test_images32})[0]

# Calculate how many predictions match the true labels.
match_count = sum([int(y == y_) for y, y_ in zip(test_y, predicted)])
accuracy = match_count / len(test_y)
print("Accuracy: {:.3f}".format(accuracy * 100))  # accuracy as a percentage
>>>
Accuracy: 87.583

Not bad in terms of accuracy. Beyond this, we can also compute other performance metrics, such as precision, recall, and the F1 score, and visualize the results in a confusion matrix that shows predicted versus actual label counts. We can also improve the accuracy further by tuning the network and its hyperparameters, but I leave that to the reader.
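As a sketch of the extra metrics mentioned above, here is a minimal pure-Python version (no scikit-learn dependency assumed) that builds confusion-matrix counts and per-class precision and recall. The label lists are toy data for illustration; on the real data you would substitute test_y and predicted:

```python
from collections import Counter

# Toy labels for illustration; substitute test_y and predicted from above.
actual    = [0, 0, 1, 1, 1, 2, 2, 2, 2, 0]
predicted = [0, 1, 1, 1, 2, 2, 2, 1, 2, 0]

# Confusion matrix as sparse counts: (actual, predicted) -> count.
confusion = Counter(zip(actual, predicted))

def precision(cls):
    # Of everything predicted as cls, how much really was cls?
    tp = confusion[(cls, cls)]
    predicted_as_cls = sum(c for (a, p), c in confusion.items() if p == cls)
    return tp / predicted_as_cls

def recall(cls):
    # Of everything that really was cls, how much did we predict as cls?
    tp = confusion[(cls, cls)]
    actually_cls = sum(c for (a, p), c in confusion.items() if a == cls)
    return tp / actually_cls

for cls in sorted(set(actual)):
    print(cls, round(precision(cls), 2), round(recall(cls), 2))
# prints:
# 0 1.0 0.67
# 1 0.5 0.67
# 2 0.75 0.75
```

The same quantities (plus F1 and a dense confusion matrix) are available ready-made from sklearn.metrics if scikit-learn is installed.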

Finally, we are done, so let's close the TensorFlow session:

session.close()