Training the model

Next, we need to train our model with a sample set of data. We will again be using the MNIST set of handwritten digits; this is easy, free, and convenient. Get back into the code listing and continue the exercise as follows:

  1. Pick up where we left off and locate the following section of code:
from tensorflow.keras.datasets import mnist
import numpy as np
(x_train, _), (x_test, _) = mnist.load_data()

  2. We start by importing the mnist dataset and numpy, and then loading the data into the x_train and x_test sets. As a general rule in data science and machine learning, you want a training set for learning and a separate evaluation set for testing. These datasets are often generated by randomly splitting the data into 80 percent for training and 20 percent for testing (see the short sketch after these steps); MNIST conveniently comes pre-split.
  3. Then we further define our training and testing inputs with the following code:
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_train = x_train.reshape((len(x_train), np.prod(x_train.shape[1:])))
x_test = x_test.reshape((len(x_test), np.prod(x_test.shape[1:])))
print(x_train.shape)
print(x_test.shape)
  4. The first two lines normalize our input grayscale pixel values, which range from 0 to 255, by dividing by 255. This gives us values from 0 to 1; we generally want to normalize our inputs this way. Next, we reshape the 28 x 28 training and testing images into flat input tensors of 784 values each, so the print statements report shapes of (60000, 784) and (10000, 784).
  5. With the models all built and compiled, it is time to start training. The next few lines are where the network will learn how to encode and decode the images:
autoencoder.fit(x_train, x_train, epochs=50, batch_size=256,
                shuffle=True, validation_data=(x_test, x_test))

encoded_imgs = encoder.predict(x_test)
decoded_imgs = decoder.predict(encoded_imgs)
  6. You can see in our code that we fit the model using x_train as both the input and the target output. We train for 50 epochs with a batch size of 256 images; feel free to play with these parameters later to see what effect they have on training. After training, the encoder and then the decoder models are used to predict on the test images.
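
As promised in step 2, here is a minimal sketch of how such an 80/20 random split could be done with numpy. MNIST already arrives pre-split, so this is purely illustrative; the data array and its size here are assumptions, not part of the book's listing:

import numpy as np

data = np.random.rand(1000, 784)             # hypothetical dataset of 1,000 flattened images
indices = np.random.permutation(len(data))   # shuffle the sample indices
split = int(0.8 * len(data))                 # 80 percent boundary
train_set = data[indices[:split]]            # first 80 percent for training
test_set = data[indices[split:]]             # remaining 20 percent for testing
print(train_set.shape, test_set.shape)       # (800, 784) (200, 784)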

That completes the model and training setup we need for this model, or models if you will. Remember, we are taking a 28 x 28 image, compressing it down to essentially 32 numbers, and then rebuilding the image from those numbers using a neural network. With our model built and trained, we want to review the output, and we will do that in the next section.
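
For readers who jumped straight to this section, the autoencoder, encoder, and decoder models used above were assembled earlier in the chapter. A minimal sketch of how such models might be wired up in Keras, assuming the simple dense 784-to-32-to-784 architecture implied by the 32-number encoding, looks roughly like this; the layer sizes and activations are assumptions rather than the book's exact listing:

from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

input_img = Input(shape=(784,))                       # flattened 28 x 28 image
encoded = Dense(32, activation='relu')(input_img)     # compress to 32 numbers
decoded = Dense(784, activation='sigmoid')(encoded)   # rebuild the 784 pixel values

autoencoder = Model(input_img, decoded)               # full encode/decode model
encoder = Model(input_img, encoded)                   # encoder half only

encoded_input = Input(shape=(32,))                    # standalone decoder input
decoder = Model(encoded_input, autoencoder.layers[-1](encoded_input))  # decoder half only

autoencoder.compile(optimizer='adam', loss='binary_crossentropy')

With models defined along these lines, the fit and predict calls shown above line up with the shapes produced by the preprocessing code.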
