Leaky ReLU

In ReLU, a negative value of x results in a zero value of y. This means that some information is lost, which makes training cycles longer, especially at the start of training. The Leaky ReLU activation function resolves this issue by mapping negative inputs to a small, nonzero output instead of zero. The following applies for Leaky ReLU:

y = x, for x ≥ 0
y = βx, for x < 0

This is shown in the following diagram:

Here, β is a parameter with a value less than one.

It can be implemented in Python as follows:

def leakyReLU(x, beta=0.01):
    # For negative inputs, return a small fraction (beta) of x
    # instead of zero; non-negative inputs pass through unchanged.
    if x < 0:
        return beta * x
    else:
        return x
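
The preceding function operates on a single scalar value. As a minimal sketch of how the same idea is applied elementwise to a whole array (using NumPy, which this section does not otherwise rely on), we could write:

import numpy as np

def leaky_relu_vec(x, beta=0.01):
    # Keep non-negative entries as they are; scale negative entries by beta.
    return np.where(x < 0, beta * x, x)

print(leaky_relu_vec(np.array([-2.0, -0.5, 0.0, 1.5])))
# The negative entries become -0.02 and -0.005; the rest pass through unchanged.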

There are three ways of specifying the value for β, as shown in the sketch after this list:

  • We can specify a default value of β.
  • We can make β a parameter in our neural network and let the network learn its value (this is called parametric ReLU).
  • We can make β a random value (this is called randomized ReLU).
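
To make these three options concrete, here is a brief sketch using PyTorch (a framework chosen here purely for illustration, not one the text prescribes): nn.LeakyReLU uses a fixed slope, nn.PReLU treats the slope as a learnable parameter, and nn.RReLU samples it from a range during training.

import torch
import torch.nn as nn

x = torch.tensor([-2.0, -0.5, 0.0, 1.5])

# 1. Fixed beta: a constant negative slope chosen up front
fixed = nn.LeakyReLU(negative_slope=0.01)

# 2. Parametric ReLU: beta is a learnable parameter, trained with the network
parametric = nn.PReLU(init=0.25)

# 3. Randomized ReLU: beta is sampled from [lower, upper] during training
randomized = nn.RReLU(lower=0.125, upper=0.333)

print(fixed(x))
print(parametric(x))
print(randomized(x))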