Building an autoencoder with Keras

While we have covered a lot of the important ground we need for understanding DL, we haven't yet built anything that really does something. One of the first problems we tackle when starting with DL is building an autoencoder to encode and then reconstruct data. Working through this exercise confirms that what goes into a network can also come back out of it, and essentially reassures us that an ANN is not a complete black box. Building and working with autoencoders also lets us tweak and test various parameters in order to understand their function. Let's get started by opening up the Chapter_1_5.py listing and following these steps:

  1. We will go through the listing section by section. First, we import the Input and Dense layers and the Model class, all from the tensorflow.keras module, with the following imports:
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

  2. Instead of single neurons, we define our DL model in Keras using layers of neurons. The Input and Dense layers are the most common ones we will use, but we will see others as well. As their names suggest, Input layers deal with input, while Dense layers are more or less your typical fully connected neuron layer, which we have already looked at.
We are using the embedded version of Keras here. The original sample was taken from the Keras blog and converted to TensorFlow.
  3. Next, we set the number of encoding dimensions with the following line:
encoding_dim = 32
  4. This is the number of dimensions we want to reduce our sample down to. In this case, it is just 32, which amounts to roughly 24.5 times compression for an image with 784 input dimensions. Remember, we get 784 input dimensions because our input images are 28 x 28, and we flatten them to a vector of length 784, with each pixel representing a single value or dimension. Next, we set up the Input layer with the 784 input dimensions with the following:
input_img = Input(shape=(784,))
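To see where the 784 comes from, here is a quick NumPy sketch of the flattening step (the random array below merely stands in for a 28 x 28 MNIST digit):

```python
import numpy as np

# A random 28 x 28 "image" standing in for an MNIST digit
image = np.random.rand(28, 28)

# Flattening turns the 2D grid into a vector of 28 * 28 = 784 values,
# one per pixel -- the 784 input dimensions of our Input layer
flat = image.reshape(784)
print(flat.shape)   # (784,)

# Reducing 784 dimensions to 32 compresses by a factor of 784 / 32 = 24.5
print(784 / 32)     # 24.5
```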
  5. The Input() line creates an Input layer with a shape of 784 inputs. Then we encode those 784 dimensions into our next Dense layer using the following lines:
encoded = Dense(encoding_dim, activation='relu')(input_img)
encoder = Model(input_img, encoded)
  6. The preceding code creates our fully connected hidden (Dense) layer of 32 (encoding_dim) neurons and builds the encoder. You can see that input_img, the Input layer, is used as input and our activation function is relu. The second line constructs a Model using the Input layer (input_img) and the Dense (encoded) layer. With two layers, we encode the image from 784 dimensions down to 32.
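Under the hood, a Dense layer computes activation(x @ W + b). Here is a rough NumPy sketch of the encoder step, using random weights in place of the trained ones Keras would learn (illustrative only, not the book's code):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(784)                        # one flattened input image
W = rng.standard_normal((784, 32)) * 0.01  # the Dense layer's weight matrix
b = np.zeros(32)                           # the Dense layer's bias vector

# relu activation: negative values are clipped to zero
encoded = np.maximum(0.0, x @ W + b)
print(encoded.shape)                       # (32,) -- the 32-dimensional encoding
```

This is exactly the fully connected computation the Dense layer performs; Keras simply manages the weights, biases, and activation for us.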
  7. Next, we need to decode the image using more layers with the following code:
decoded = Dense(784, activation='sigmoid')(encoded)
autoencoder = Model(input_img, decoded)
encoded_input = Input(shape=(encoding_dim,))

decoder_layer = autoencoder.layers[-1]
decoder = Model(encoded_input, decoder_layer(encoded_input))

autoencoder.compile(optimizer='adadelta', loss='binary_crossentropy')
  8. The next set of layers and the model we build are used to decode the images back to 784 dimensions. The last line of code is where we compile the autoencoder model with the adadelta optimizer, using a loss function of binary_crossentropy. We will spend more time on the types of loss and optimization parameters later, but for now just note that when we compile a model, we are in essence setting it up to perform backpropagation with an optimization algorithm. Remember, all of this is done automatically for us, and we don't have to deal with any of that nasty math.
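To make the decode and compile steps concrete, here is a NumPy sketch of the decoder's forward pass and the binary_crossentropy loss the optimizer will minimize (again with random, untrained weights, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.random(784)        # flattened input image, pixel values in [0, 1]
encoded = rng.random(32)   # stand-in for the 32-dimensional encoding

# Decoder Dense layer: 32 -> 784 with a sigmoid activation,
# so every reconstructed pixel lands in (0, 1)
W = rng.standard_normal((32, 784)) * 0.01
b = np.zeros(784)
decoded = 1.0 / (1.0 + np.exp(-(encoded @ W + b)))

# binary_crossentropy compares each reconstructed pixel with the original,
# averaged over all 784 pixels; training drives this value down
eps = 1e-7
d = np.clip(decoded, eps, 1 - eps)
loss = -np.mean(x * np.log(d) + (1 - x) * np.log(1 - d))
print(decoded.shape)       # (784,) -- back to the original image size
```

With untrained weights the loss is high; backpropagation, set up for us by the compile call, adjusts W and b to shrink it.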

    That sets up the main parts of our models, the encoder, decoder, and full autoencoder model, which we further compiled for later training. In the next section, we deal with training the model and making predictions.
