Batch normalization

Batch normalization is a technique for improving the performance and stability of neural networks. The idea is to normalize the layer inputs so that they have a mean of zero and a variance of one. Batch normalization was introduced in Sergey Ioffe and Christian Szegedy's 2015 paper, Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. The idea is that instead of just normalizing the inputs to the network, we normalize the inputs to layers within the network. It's called batch normalization because during training, we normalize each layer's inputs using the mean and variance of the values in the current mini-batch.
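To make this concrete, here is a minimal NumPy sketch of the training-time forward pass. The function name `batch_norm_forward` is illustrative, and the learnable scale and shift parameters (`gamma` and `beta`, from the original paper) and the small constant `eps` for numerical stability are assumptions not spelled out in the text above.

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """Normalize a mini-batch of layer inputs to zero mean and unit variance,
    then apply a learned scale (gamma) and shift (beta)."""
    batch_mean = x.mean(axis=0)   # per-feature mean over the mini-batch
    batch_var = x.var(axis=0)     # per-feature variance over the mini-batch
    x_hat = (x - batch_mean) / np.sqrt(batch_var + eps)  # normalized inputs
    return gamma * x_hat + beta   # scaled and shifted outputs

# Example: a mini-batch of 4 samples with 3 features each
x = np.random.randn(4, 3) * 10 + 5
out = batch_norm_forward(x, gamma=np.ones(3), beta=np.zeros(3))
print(out.mean(axis=0))  # approximately 0 for each feature
print(out.std(axis=0))   # approximately 1 for each feature
```

With `gamma` set to ones and `beta` set to zeros, the output is simply the normalized input; during training these parameters are learned, which lets the network recover the original activations if that turns out to be optimal.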
