Introducing feed-forward neural networks

A feed-forward neural network (FFNN) consists of a large number of neurons organized in layers: one input layer, one or more hidden layers, and one output layer. Each neuron is connected to all the neurons of the previous layer, but the connections are not all equivalent: each one has its own weight. The weights of these connections encode the knowledge of the network.

Data enters at the inputs and passes through the network, layer by layer, until it arrives at the outputs; during this operation there is no feedback between layers. This is why these networks are called feed-forward neural networks.
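The following is a minimal sketch of this forward pass in NumPy. The layer sizes, the random weights, and the sigmoid activation are illustrative assumptions, not values taken from the text:

import numpy as np

def sigmoid(z):
    """Element-wise sigmoid activation."""
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative architecture: 3 inputs -> 4 hidden neurons -> 2 outputs.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # weights: input layer -> hidden layer
b1 = np.zeros(4)               # hidden-layer biases
W2 = rng.normal(size=(2, 4))   # weights: hidden layer -> output layer
b2 = np.zeros(2)               # output-layer biases

def forward(x):
    """Propagate an input vector through the network, layer by layer.
    There is no feedback: each layer's output feeds only the next layer."""
    h = sigmoid(W1 @ x + b1)   # hidden-layer activations
    y = sigmoid(W2 @ h + b2)   # output-layer activations
    return y

print(forward(np.array([0.5, -1.0, 2.0])))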

An FFNN with enough neurons in the hidden layer is able to approximate with arbitrary precision (see the sketch after this list):

  • Any continuous function, with one hidden layer
  • Any function, even discontinuous, with two hidden layers
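As a rough empirical illustration of the first point, the following sketch fits a single-hidden-layer network to a continuous function. The use of scikit-learn's MLPRegressor, the 50-neuron hidden layer, and the target function sin(x) are all illustrative choices, not prescriptions from the text:

import numpy as np
from sklearn.neural_network import MLPRegressor

# Sample a continuous target function, sin(x), on [0, 2*pi].
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 2 * np.pi, size=(500, 1))
y = np.sin(X).ravel()

# One hidden layer of 50 neurons; more neurons generally tighten the fit.
net = MLPRegressor(hidden_layer_sizes=(50,), activation="tanh",
                   solver="lbfgs", max_iter=5000, random_state=0)
net.fit(X, y)

X_test = np.linspace(0.0, 2 * np.pi, 100).reshape(-1, 1)
print("max abs error:",
      np.max(np.abs(net.predict(X_test) - np.sin(X_test).ravel())))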

However, it is not possible to determine a priori the required number of hidden layers, or the number of neurons each must contain, to compute a non-linear function with adequate precision. Despite some rules of thumb, choosing the structure of the network still relies on experience and heuristics.

If the neural network architecture contains too few hidden layers or neurons, the network is not able to approximate the unknown function with adequate precision, either because the function is too complex or because the backpropagation algorithm gets stuck in a local minimum. If the network contains too many hidden layers or neurons, we run into an over-fitting problem, namely a worsening of the network's generalization ability.
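A common way to watch for this trade-off is to compare training and held-out performance as capacity grows. The sketch below, again using scikit-learn as an illustrative assumption, trains networks of increasing hidden-layer size on noisy data; over-fitting shows up as a widening gap between the training and test scores:

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Noisy samples of a simple function.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.3, size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for hidden in [(2,), (20,), (200, 200)]:
    net = MLPRegressor(hidden_layer_sizes=hidden, solver="lbfgs",
                       max_iter=5000, random_state=0)
    net.fit(X_tr, y_tr)
    print(hidden, "train R^2:", round(net.score(X_tr, y_tr), 3),
          "test R^2:", round(net.score(X_te, y_te), 3))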

Figure: Feed-forward neural network with two hidden layers and an input bias