The backpropagation algorithm

One commonly used supervised learning algorithm is the backpropagation algorithm.

The basic steps of the training procedure are as follows:

  1. Initialize the net with random weights.
  2. For all training cases, repeat:
    • Forward pass: calculate the error committed by the net, that is, the difference between the desired output and the actual output.
    • Backward pass: for all layers, starting with the output layer and working back to the input layer:
      • Compare the layer's output with the correct output (error function).
      • Adapt the weights in the current layer to minimize the error function. This is backpropagation's optimization step.

The training process ends when the error on the validation set begins to increase, because this can mark the beginning of a phase of over-fitting, that is, the phase in which the network tends to interpolate the training data at the expense of generalization ability. A minimal sketch of this procedure is given below.
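The following sketch illustrates the loop above under some illustrative assumptions not stated in the text: a single hidden layer, sigmoid activations, a mean squared error function, full-batch gradient descent, and a toy XOR dataset reused as a stand-in validation set. The hyperparameters (layer sizes, learning rate) are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy XOR data; a real setup would use a separate held-out validation set.
X_train = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y_train = np.array([[0], [1], [1], [0]], dtype=float)
X_val, y_val = X_train, y_train  # illustrative stand-in only

# 1. Initialize the net with random weights.
W1 = rng.normal(scale=1.0, size=(2, 4))   # input -> hidden
W2 = rng.normal(scale=1.0, size=(4, 1))   # hidden -> output
lr = 0.5
best_val_error = np.inf

for epoch in range(10_000):
    # 2. Forward pass: compute the actual output and the error,
    #    i.e. the difference between desired and actual output.
    hidden = sigmoid(X_train @ W1)
    output = sigmoid(hidden @ W2)
    error = y_train - output

    # Backward pass: propagate the error from the output layer
    # back towards the input layer.
    delta_out = error * output * (1 - output)                # output-layer term
    delta_hid = (delta_out @ W2.T) * hidden * (1 - hidden)   # hidden-layer term

    # Adapt the weights in each layer to reduce the error function.
    W2 += lr * hidden.T @ delta_out
    W1 += lr * X_train.T @ delta_hid

    # Stop when the error on the validation set begins to increase,
    # which can signal the onset of over-fitting.
    val_out = sigmoid(sigmoid(X_val @ W1) @ W2)
    val_error = np.mean((y_val - val_out) ** 2)
    if val_error > best_val_error:
        break
    best_val_error = val_error

print("final training error:", np.mean(error ** 2))
```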