Optimizer

Finally, we need an optimization method that will help us learn something from the dataset. As we know, vanilla RNNs suffer from both exploding and vanishing gradients. LSTMs address only the vanishing gradient problem; even with LSTM cells, some gradient values can still explode and grow without bound. To fix this, we can use a technique called gradient clipping, which clips gradients that exceed a specific threshold.

So, let's define our optimizer, using Adam for the learning process:

def build_model_optimizer(model_loss, learning_rate, grad_clip):

    # define the optimizer for training, using gradient clipping to avoid exploding gradients
    trainable_variables = tf.trainable_variables()
    gradients, _ = tf.clip_by_global_norm(tf.gradients(model_loss, trainable_variables), grad_clip)

    # use the Adam optimizer and apply the clipped gradients to the trainable variables
    train_operation = tf.train.AdamOptimizer(learning_rate)
    model_optimizer = train_operation.apply_gradients(zip(gradients, trainable_variables))

    return model_optimizer
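
As a quick sketch of how this function might be wired into the rest of the training graph, assuming the TensorFlow 1.x API used throughout this chapter (the learning rate and clipping threshold below are hypothetical values, and model_loss stands for the loss tensor built earlier):

import tensorflow as tf

# hypothetical hyperparameters, chosen here only for illustration
learning_rate = 0.001
grad_clip = 5

# build the optimizer op from the loss tensor defined earlier in the chapter
model_optimizer = build_model_optimizer(model_loss, learning_rate, grad_clip)

# during training, this op would be run inside a session, for example:
# sess.run(model_optimizer, feed_dict={...})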