Defining the generator model

The generator model is the neural network that creates synthetic target data from random inputs. In this case, we will use a convolutional neural network (CNN) in reverse: we start with a vector of random values, pass it through a fully connected layer, and then reshape the result into a three-dimensional array. As an intermediate step, this array is only half the width and height of the target image; we then upsample it using a transposed convolution layer. The end result is an array of normalized pixel values with the same shape as our target array. This array becomes the data object used to try to fool the discriminator model: over time, it is trained to resemble the target data so closely that the discriminator can no longer predict, with high probability, which image is the true data image. We will define the generator model using the following steps:

  1. First, we define the model's entry point: a 100-dimensional vector. Everything we do to define our models is done with Keras, so we load the keras library at this step and then define our input shape as a vector with 100 values.
  2. Then, we declare what will be passed to this model. Using the following code, we state that the input will be a vector of 100 values, which we will later populate with random values:
library(keras)
generator_in <- layer_input(shape = c(100))
  3. After running this step, we have a special data object, called a tensor, in our data environment. The object records the layer's type, name, shape, and data type.
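If you want to confirm this, printing the tensor is a quick, optional check of our own; the exact text varies by Keras version, but it includes the shape and data type:
# Optional check: printing the input tensor displays its shape and dtype
print(generator_in)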

  4. After this, we define how the random values will be processed, transformed, and reshaped into a synthetic array that matches our target arrays. The code to do this is long, but many of the parts repeat. A few lines are required, while others can be modified: the layer_dense layer must contain exactly as many units as the layer_reshape layer will consume. In this case, we create a shape with a width and height of 25 and a depth of 128. The depth is modifiable; however, when using one transposed convolution layer, the width and height must be half the final image's dimensions, as follows:
generator_out <- generator_in %>%
  layer_dense(units = 128 * 25 * 25) %>%
  layer_reshape(target_shape = c(25, 25, 128))
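As a quick sanity check on the arithmetic (an optional aside, not part of the model), the dense layer's unit count must equal the product of the reshape dimensions:
# The dense layer must emit exactly as many values as the reshape consumes
128 * 25 * 25         # 80000
prod(c(25, 25, 128))  # 80000, matching target_shape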
  5. Next, we add a convolution layer with 512 filters; because it uses padding = "same", it preserves the 25 x 25 shape. The upsampling itself happens later, when a layer_conv_2d_transpose layer with a 2 x 2 stride doubles the shape from 25 x 25 to 50 x 50:
generator_out <- generator_in %>%
  layer_dense(units = 128 * 25 * 25) %>%
  layer_reshape(target_shape = c(25, 25, 128)) %>%
  layer_conv_2d(filters = 512, kernel_size = 5,
                padding = "same")
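To see the doubling for yourself, here is a minimal standalone sketch (the demo_in, demo_out, and demo names are ours, not part of the book's model) confirming that a transposed convolution with strides = 2 maps 25 x 25 to 50 x 50:
# Standalone demo: a strided transposed convolution doubles width and height
demo_in <- layer_input(shape = c(25, 25, 128))
demo_out <- demo_in %>%
  layer_conv_2d_transpose(filters = 256, kernel_size = 4,
                          strides = 2, padding = "same")
demo <- keras_model(demo_in, demo_out)
demo$output_shape  # (NULL, 50, 50, 256): spatial shape doubled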
  6. The convolution applies filters that look for patterns, and batch normalization rescales the results of the convolution step so that the mean is close to 0 and the standard deviation is close to 1; ReLU is then used as our activation function. We add these normalization and activation layers after both our dense layer and our convolution layer using the following code:
generator_out <- generator_in %>%
  layer_dense(units = 128 * 25 * 25) %>%
  layer_batch_normalization(momentum = 0.5) %>%
  layer_activation_relu() %>%
  layer_reshape(target_shape = c(25, 25, 128)) %>%
  layer_conv_2d(filters = 512, kernel_size = 5,
                padding = "same") %>%
  layer_batch_normalization(momentum = 0.5) %>%
  layer_activation_relu()
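As a quick illustration of the normalization idea (a standalone aside using base R's scale function, which performs the same center-and-rescale operation but without batch normalization's learned parameters):
# Standalone aside: centering and rescaling a batch of values
x <- rnorm(10000, mean = 5, sd = 3)
x_norm <- as.numeric(scale(x))
round(mean(x_norm), 3)  # approximately 0
round(sd(x_norm), 3)    # approximately 1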
  7. After this, we can continue adding convolution layers using the same pattern of convolution, normalization, and activation. Here, we add four additional series of layers: first the layer_conv_2d_transpose layer that upsamples from 25 x 25 to 50 x 50, followed by three convolution blocks that step the filters down from 256 to 64:
generator_out <- generator_in %>%
  layer_dense(units = 128 * 25 * 25) %>%
  layer_batch_normalization(momentum = 0.5) %>%
  layer_activation_relu() %>%
  layer_reshape(target_shape = c(25, 25, 128)) %>%
  layer_conv_2d(filters = 512, kernel_size = 5,
                padding = "same") %>%
  layer_batch_normalization(momentum = 0.5) %>%
  layer_activation_relu() %>%
  layer_conv_2d_transpose(filters = 256, kernel_size = 4,
                          strides = 2, padding = "same") %>%
  layer_batch_normalization(momentum = 0.5) %>%
  layer_activation_relu() %>%
  layer_conv_2d(filters = 256, kernel_size = 5,
                padding = "same") %>%
  layer_batch_normalization(momentum = 0.5) %>%
  layer_activation_relu() %>%
  layer_conv_2d(filters = 128, kernel_size = 5,
                padding = "same") %>%
  layer_batch_normalization(momentum = 0.5) %>%
  layer_activation_relu() %>%
  layer_conv_2d(filters = 64, kernel_size = 5,
                padding = "same") %>%
  layer_batch_normalization(momentum = 0.5) %>%
  layer_activation_relu()
  8. In the very last step, the filters argument needs to be set to the number of channels in the image; in this case, three, for the red, green, and blue channels of a color image. Its tanh activation keeps the output values between -1 and 1. This completes the definition of our generator model, which in its entirety is defined using the following code:
generator_out <- generator_in %>%
  layer_dense(units = 128 * 25 * 25) %>%
  layer_batch_normalization(momentum = 0.5) %>%
  layer_activation_relu() %>%
  layer_reshape(target_shape = c(25, 25, 128)) %>%
  layer_conv_2d(filters = 512, kernel_size = 5,
                padding = "same") %>%
  layer_batch_normalization(momentum = 0.5) %>%
  layer_activation_relu() %>%
  layer_conv_2d_transpose(filters = 256, kernel_size = 4,
                          strides = 2, padding = "same") %>%
  layer_batch_normalization(momentum = 0.5) %>%
  layer_activation_relu() %>%
  layer_conv_2d(filters = 256, kernel_size = 5,
                padding = "same") %>%
  layer_batch_normalization(momentum = 0.5) %>%
  layer_activation_relu() %>%
  layer_conv_2d(filters = 128, kernel_size = 5,
                padding = "same") %>%
  layer_batch_normalization(momentum = 0.5) %>%
  layer_activation_relu() %>%
  layer_conv_2d(filters = 64, kernel_size = 5,
                padding = "same") %>%
  layer_batch_normalization(momentum = 0.5) %>%
  layer_activation_relu() %>%
  layer_conv_2d(filters = 3, kernel_size = 7,
                activation = "tanh", padding = "same")
  9. After running this code, we will see two objects in our environment: the input tensor and the chain of connected layers that produces the output tensor. Setting up our tensors in this way allows data to be fed in batches once the keras_model function ties them together.

  10. After this, we have defined that the input will be 100 random values and that the output will be those values mapped to a data object with the same dimensions as our target image.
  11. We can then define the model with keras_model, which takes the input and output tensors as its arguments. Passing in the two tensor objects we defined completes the definition of our model.
  12. After defining the model, we can run the summary function on the generator model to see what happens to the data at each layer. We define our generator and view the summary using the following code:
generator <- keras_model(generator_in, generator_out)
summary(generator)
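As a final sanity check (a usage sketch of our own, not a step from the book's training process), we can feed one batch of random values through the untrained generator and confirm that the synthetic image has the expected shape and value range:
# Usage sketch: generate one synthetic image from 100 random values
noise <- matrix(rnorm(100), nrow = 1, ncol = 100)
fake <- predict(generator, noise)
dim(fake)    # 1 50 50 3: one 50 x 50 RGB image
range(fake)  # within [-1, 1] because of the tanh activation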
  13. After running the summary function, we will see details about our model printed to the console, including each layer's output shape and parameter count.
  14. From the console output, we can see that we start with one fully connected layer and, after numerous intermediate layers, end with a final layer that matches the shape of our target image data.

We now have our generator completely defined. We have seen how random values can be inserted and then transformed to produce a synthetic image; actually passing data through this model happens later. With a system in place for producing fake images, we now move on to defining the discriminator model, which will determine whether a given array of pixel data is a real or a fake image.
