Calculating loss 

For the discriminator, the total loss is the sum of the losses for real and fake images. The losses will be sigmoid cross-entropies, which we can get with the TensorFlow function tf.nn.sigmoid_cross_entropy_with_logits. We then take the mean over all the images in the batch. So the losses will look like this:

tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
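
Under the hood, this function computes the numerically stable form max(x, 0) - x * z + log(1 + exp(-|x|)) for logits x and labels z. Here is a minimal sketch, assuming TensorFlow 1.x as in the rest of this section (the example logits and labels are made up), that checks the built-in op against that formula:

import tensorflow as tf  # assumes TensorFlow 1.x, as used throughout this section

# Made-up logits and labels, just for illustration
logits = tf.constant([2.0, -1.0, 0.5])
labels = tf.constant([1.0, 0.0, 1.0])

# Built-in, numerically stable sigmoid cross-entropy
loss = tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels)

# The same quantity written out explicitly:
# max(x, 0) - x * z + log(1 + exp(-|x|))
manual = (tf.maximum(logits, 0.0) - logits * labels
          + tf.log1p(tf.exp(-tf.abs(logits))))

with tf.Session() as sess:
    print(sess.run(loss))
    print(sess.run(manual))  # matches the built-in result elementwise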

To help the discriminator generalize better, the labels for real images can be reduced slightly from 1.0 to, say, 0.9, for example by multiplying them by (1 - smooth) with a smoothing parameter smooth = 0.1. This is known as label smoothing, and it is typically used with classifiers to improve performance. The discriminator loss for the fake data is similar. The logits are d_logits_fake, which we got by passing the generator output to the discriminator. These fake logits are used with labels of all zeros. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
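
To see what smoothing does, here is a small sketch, again assuming TensorFlow 1.x (the smooth value of 0.1 and the logits are illustrative), comparing the loss on confident "real" predictions with and without smoothing:

import tensorflow as tf  # assumes TensorFlow 1.x

logits = tf.constant([4.0, 4.0])  # confident "real" predictions, made up for illustration
smooth = 0.1

# Hard labels of 1.0 versus smoothed labels of 0.9
hard = tf.nn.sigmoid_cross_entropy_with_logits(
    logits=logits, labels=tf.ones_like(logits))
soft = tf.nn.sigmoid_cross_entropy_with_logits(
    logits=logits, labels=tf.ones_like(logits) * (1 - smooth))

with tf.Session() as sess:
    print(sess.run(hard))  # near zero: hard labels let the loss saturate
    print(sess.run(soft))  # larger: smoothing keeps penalizing overconfident logits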

Finally, the generator loss uses d_logits_fake, the fake image logits, but now the labels are all ones. The generator is trying to fool the discriminator, so it wants the discriminator to output ones for fake images:

# Calculate losses
smooth = 0.1  # label smoothing factor: real labels become 0.9

# Discriminator loss on real images: smoothed labels of (1 - smooth)
d_loss_real = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(
        logits=d_logits_real,
        labels=tf.ones_like(d_logits_real) * (1 - smooth)))

# Discriminator loss on fake images: labels of all zeros
d_loss_fake = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(
        logits=d_logits_fake,
        labels=tf.zeros_like(d_logits_fake)))

# Total discriminator loss
d_loss = d_loss_real + d_loss_fake

# Generator loss: fake image logits, but labels of all ones
g_loss = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(
        logits=d_logits_fake,
        labels=tf.ones_like(d_logits_fake)))
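
As a quick sanity check on the generator objective, a sketch like the following (TensorFlow 1.x assumed; the placeholder name g_loss_check and the logit values are made up for illustration) shows that the loss is small when the discriminator is fooled and large when it is not:

import numpy as np
import tensorflow as tf  # assumes TensorFlow 1.x

fake_logits = tf.placeholder(tf.float32, shape=(None,))
g_loss_check = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(
        logits=fake_logits, labels=tf.ones_like(fake_logits)))

with tf.Session() as sess:
    # Discriminator fooled: sigmoid(5.0) is near 1, so the loss is small
    print(sess.run(g_loss_check,
                   {fake_logits: np.full(4, 5.0, dtype=np.float32)}))
    # Discriminator not fooled: sigmoid(-5.0) is near 0, so the loss is large
    print(sess.run(g_loss_check,
                   {fake_logits: np.full(4, -5.0, dtype=np.float32)}))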