Model inputs

Before diving into building the core of the GAN, represented by the generator and discriminator, we are going to define the inputs of our computational graph. As shown in Figure 2, we need two inputs. The first one will be the real images, which will be fed to the discriminator. The second is the latent space vector, which will be fed to the generator and used to generate its fake images:

# Defining the model inputs for the generator and discriminator
def inputs_placeholders(discriminator_real_dim, gen_z_dim):
    # Placeholder for the flattened real images fed to the discriminator
    real_discriminator_input = tf.placeholder(tf.float32, (None, discriminator_real_dim), name="real_discriminator_input")
    # Placeholder for the latent space vector z fed to the generator
    generator_inputs_z = tf.placeholder(tf.float32, (None, gen_z_dim), name="generator_input_z")

    return real_discriminator_input, generator_inputs_z
Figure 3: Architecture of the MNIST GAN implementation

Now it's time to dive into building the two core components of our architecture. We will start by building the generator part. As shown in Figure 3, the generator will consist of at least one hidden layer, which will work as a function approximator. Also, instead of using the normal ReLU activation function, we will use something called a leaky ReLU. This will allow gradient values to flow through the layer without the hard zero cut-off of a standard ReLU (more on leaky ReLU in the next section).
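To see why the leaky variant helps gradients flow, here is a minimal NumPy sketch of the two activations; the slope value of 0.2 for negative inputs is an illustrative choice, not a value taken from this chapter:

```python
import numpy as np

def relu(x):
    # Standard ReLU: negative inputs are zeroed out, so no gradient
    # flows back through those units.
    return np.maximum(x, 0.0)

def leaky_relu(x, alpha=0.2):
    # Leaky ReLU: negative inputs keep a small slope (alpha) instead of
    # being clamped to zero, so a gradient can still flow through them.
    return np.where(x > 0, x, alpha * x)

x = np.array([-2.0, -0.5, 0.0, 1.0, 3.0])
print(relu(x))        # negatives become 0.0
print(leaky_relu(x))  # negatives are scaled by alpha
```

For positive inputs the two functions are identical; they only differ in how they treat the negative half of the input range.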
