Discriminator

Next up, we will build the second main component of the generative adversarial network: the discriminator. The discriminator network is very similar to the generator, but instead of using the tanh activation function at the output, we use the sigmoid activation function; it squashes the output to a value between 0 and 1 that represents the discriminator's judgment of whether the input image is real or generated:

def discriminator(disc_input, num_hidden_units=128, reuse_vars=False, leaky_relu_alpha=0.01):
    ''' Building the discriminator part of the network

    Function Arguments
    ---------
    disc_input : discriminator input tensor
    num_hidden_units : Number of neurons/units in the hidden layer
    reuse_vars : Reuse variables with tf.variable_scope
    leaky_relu_alpha : leaky ReLU parameter

    Function Returns
    -------
    sigmoid_out, logits_layer : the sigmoid output and the raw logits
    '''
    with tf.variable_scope('discriminator', reuse=reuse_vars):

        # Defining the discriminator hidden layer
        hidden_layer_1 = tf.layers.dense(disc_input, num_hidden_units, activation=None)

        # Feeding the output of hidden_layer_1 to leaky relu
        hidden_layer_1 = tf.maximum(hidden_layer_1, leaky_relu_alpha * hidden_layer_1)

        logits_layer = tf.layers.dense(hidden_layer_1, 1, activation=None)
        sigmoid_out = tf.nn.sigmoid(logits_layer)

    return sigmoid_out, logits_layer
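
The reuse_vars argument matters because the discriminator is called twice in a GAN: once on real images and once on the generator's samples, and both calls must share the same weights. The following is a minimal sketch of that usage, assuming flattened 28x28 MNIST-style inputs; the placeholder names and shapes are illustrative assumptions, not taken from the chapter's earlier code:

import tensorflow as tf

# Illustrative placeholders (assumed names/shapes) for flattened 28x28 images
real_images = tf.placeholder(tf.float32, shape=(None, 784), name='real_images')
fake_images = tf.placeholder(tf.float32, shape=(None, 784), name='fake_images')

# The first call creates the variables inside the 'discriminator' scope...
disc_real_out, disc_real_logits = discriminator(real_images)

# ...the second call scores the fake samples with the SAME weights,
# since reuse_vars=True turns on reuse inside tf.variable_scope
disc_fake_out, disc_fake_logits = discriminator(fake_images, reuse_vars=True)

In the actual training graph, the second call would receive the generator's output tensor rather than a separate placeholder; the key point is that only the reuse flag changes between the two calls.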