Summary

We have seen how to implement a feed-forward neural network (FFNN) architecture for an image classification problem.

An FFNN is characterized by a set of input units, a set of output units, and one or more hidden layers that connect the input layer to the output layer. The connections between layers are complete and run in a single direction: each unit receives a signal from all the units of the previous layer and transmits its output value, suitably weighted, to all the units of the next layer. For each layer a transfer function (sigmoid, softmax, ReLU) must be defined; the choice of transfer function depends on the architecture and hence on the problem being addressed.
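The forward flow described above can be sketched in plain NumPy: a batch of inputs passes through a sigmoid hidden layer and a softmax output layer, with every unit fully connected to the next layer. The layer sizes (784 inputs, 32 hidden units, 10 classes) are illustrative assumptions, not values from the text.

```python
import numpy as np

def sigmoid(z):
    # Squashes each value into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    # Subtract the row max for numerical stability, then normalize
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def forward(x, W1, b1, W2, b2):
    # Single-direction flow: input -> hidden (sigmoid) -> output (softmax)
    h = sigmoid(x @ W1 + b1)     # each hidden unit sees all inputs, weighted
    return softmax(h @ W2 + b2)  # each output unit sees all hidden units

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 784))                       # batch of 4 flattened 28x28 images
W1, b1 = rng.standard_normal((784, 32)) * 0.01, np.zeros(32)
W2, b2 = rng.standard_normal((32, 10)) * 0.01, np.zeros(10)
probs = forward(x, W1, b1, W2, b2)
print(probs.shape)   # one probability distribution per input image
```

Because the output layer uses softmax, each row of `probs` is a probability distribution over the ten classes and sums to 1.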

We then implemented four different FFNN models: the first with a single hidden layer using the softmax activation function, followed by three more complex models, each with five hidden layers in total but with different activation functions:

  • Four sigmoid layers and one softmax layer
  • Four ReLU layers and one softmax layer
  • Four ReLU layers with dropout regularization and one softmax layer
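The most complex of the models above could be sketched in Keras as follows, assuming a Sequential stack of four ReLU layers, each followed by dropout, and a softmax output layer. The layer widths, dropout rate, and optimizer are illustrative assumptions, not the exact values used in the implementation.

```python
from tensorflow import keras

# Five layers in total: four ReLU hidden layers with dropout, one softmax output.
# Widths (256/128/64/32), dropout rate (0.5), and optimizer are assumptions.
model = keras.Sequential([
    keras.layers.Input(shape=(784,)),            # flattened 28x28 image
    keras.layers.Dense(256, activation="relu"),
    keras.layers.Dropout(0.5),                   # randomly zero units while training
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dropout(0.5),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dropout(0.5),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dropout(0.5),
    keras.layers.Dense(10, activation="softmax"),  # class probabilities
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

Dropout is active only during training; at inference time the full network is used, which is why it acts as a regularizer rather than changing the model's architecture.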

In the next chapter, we will go further into the complexity of neural network models by introducing Convolutional Neural Networks (CNNs), which have had a big impact on deep learning techniques: we will study their main features and look at some implementation examples.
