Applying dropout operations with TensorFlow

When we apply the dropout operation to an input tensor, the zeroing of elements propagates to every unit in the architecture that depends on them. To apply the dropout operation, TensorFlow implements the tf.nn.dropout method, which works as follows:

tf.nn.dropout (x, keep_prob, noise_shape, seed, name)

Where x is the original tensor. keep_prob is the probability that each element is kept, and also determines the factor (1/keep_prob) by which the surviving elements are multiplied, so the expected sum of the tensor is preserved. noise_shape is an optional 1-D tensor describing the shape of the randomly generated keep/drop flags; a dimension set to 1 shares a single decision along that axis, so zeroing can be applied per row, per column, and so on, rather than independently for every element. Let's have a look at this code segment:

import tensorflow as tf

# Sample input vector
X = [1.5, 0.5, 0.75, 1.0, 0.75, 0.6, 0.4, 0.9]
# Keep each element with probability 0.5; survivors are scaled by 1/0.5
drop_out = tf.nn.dropout(X, 0.5)
sess = tf.Session()
with sess.as_default():
    print(drop_out.eval())
sess.close()
[ 3. 0. 1.5 0. 0. 1.20000005 0. 1.79999995]

In the preceding example, you can see the result of applying dropout to the X variable: each element was zeroed with a probability of 0.5, and the elements that survived were doubled (multiplied by 1/0.5, the inverse of the keep probability). Because dropout is random, your output will differ between runs.
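The noise_shape parameter is most useful on higher-rank tensors, where we may want whole rows or columns to be kept or dropped together. The following sketch is our own illustration (not from the original text), using the same TF 1.x session API as above, and shares one keep/drop decision per row:

import tensorflow as tf

# A 2 x 4 tensor of ones; noise_shape=[2, 1] generates one random
# keep/drop flag per row and broadcasts it across the columns, so
# each row is either fully zeroed or fully kept (and scaled by 2)
X = tf.ones([2, 4])
drop_rows = tf.nn.dropout(X, keep_prob=0.5, noise_shape=[2, 1])

sess = tf.Session()
with sess.as_default():
    print(drop_rows.eval())
sess.close()

Running this prints a 2 x 4 matrix in which each row is either all zeros or all twos, rather than a mix of dropped and kept elements within a row.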
