Sampling from the generator

In the previous section, we went through some examples that were generated during the training process of this GAN architecture. We can also generate completely new images from the generator by loading the checkpoints we saved and feeding the generator with new samples drawn from the latent space, which it uses to generate new images:

# Sampling from the generator
import numpy as np
import tensorflow as tf

# Only the generator's variables are needed for sampling
saver = tf.train.Saver(var_list=g_vars)

with tf.Session() as sess:
    # Restoring the saved checkpoints
    saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))

    # Draw 16 new latent vectors uniformly from [-1, 1)
    gen_sample_z = np.random.uniform(-1, 1, size=(16, z_size))

    # Run the generator in reuse mode on the new latent vectors
    generated_samples = sess.run(
        generator(generator_input_z, input_img_size, reuse_vars=True),
        feed_dict={generator_input_z: gen_sample_z})

view_generated_samples(0, [generated_samples])
Figure 9: Samples from the generator
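
The view_generated_samples helper was defined earlier when we visualized samples during training. If you are reproducing this example on its own, a minimal sketch of such a helper might look like the following; the 4 x 4 grid layout and grayscale rendering are assumptions for illustration, not the exact implementation used here:

# Hypothetical sketch of a viewing helper, assuming a list of
# sample batches where each sample is a flat grayscale image vector
import numpy as np
import matplotlib.pyplot as plt

def view_generated_samples(epoch, samples, nrows=4, ncols=4):
    fig, axes = plt.subplots(nrows, ncols, figsize=(7, 7),
                             sharex=True, sharey=True)
    for ax, img in zip(axes.flatten(), samples[epoch]):
        ax.axis('off')
        # Reshape the flat output vector back into a square image
        side = int(np.sqrt(img.shape[0]))
        ax.imshow(img.reshape((side, side)), cmap='Greys_r')
    plt.show()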

There are some observations you can make while implementing this example. At the beginning of the training process, two things happen. First, the generator doesn't yet know how to produce images that resemble the real ones we fed to the network, because it has never learned what they look like. Second, the discriminator doesn't yet know how to distinguish between the fake images made by the generator and the real ones.

Later on, the generator starts to produce fake images that make sense to some extent, because it gradually learns the data distribution that the original input images come from. In parallel, the discriminator becomes better at distinguishing between fake and real images, yet by the end of the training process it will be fooled, since the generated samples have become close enough to the real data.
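
One practical way to observe these dynamics is to record the discriminator and generator losses at each epoch and plot them after training. The sketch below assumes you appended the per-epoch losses to two lists, d_losses and g_losses, in your own training loop; these names are illustrative and not part of the code above:

# Hypothetical sketch: plotting loss curves collected during training
import matplotlib.pyplot as plt

def plot_gan_losses(d_losses, g_losses):
    # Roughly, the generator loss should fall as it learns the data
    # distribution, while the discriminator loss rises as it gets fooled
    plt.plot(d_losses, label='Discriminator loss')
    plt.plot(g_losses, label='Generator loss')
    plt.xlabel('Epoch')
    plt.ylabel('Loss')
    plt.legend()
    plt.show()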
