Learning paradigms

There are essentially two learning paradigms for neural networks: supervised and unsupervised. Learning in the human mind, for example, works in a similar way: we can learn from observations without any kind of target pattern (unsupervised), or we can have a teacher who shows us the right pattern to follow (supervised). The difference between the two paradigms lies mainly in whether a target pattern is available, and the appropriate choice varies from problem to problem.

Supervised learning

This category of learning deals with pairs of X's and Y's, and the objective is to learn a function f: X → Y that maps between them. Here, the Y data is the supervisor, the set of desired target outputs, and the X data is the independent source data from which the Y data is generated. It is analogous to a teacher showing somebody how a certain task should be performed, as shown in the following figure:

Supervised learning

One particular feature of this learning paradigm is that there is a direct error reference: the comparison between the target output and the network's current actual output. This mismatch is quantified by a cost function, which measures how far the actual outputs are from the desired ones.

Tip

A cost function is just a measurement to be minimized in an optimization problem. That means that one seeks to find the parameters that drive the cost function to the lowest possible value.

The cost function will be covered in detail further in this chapter.
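As a preview, the idea of a cost function can be sketched in a few lines. The following is a minimal, illustrative example using the mean squared error (MSE); the function name and data are hypothetical, and other cost functions exist:

```python
# Minimal sketch of a cost function: mean squared error (MSE).
# It quantifies the mismatch between desired and actual outputs.

def mse(targets, outputs):
    """Average squared difference between desired and actual outputs."""
    return sum((t - o) ** 2 for t, o in zip(targets, outputs)) / len(targets)

targets = [1.0, 0.0, 1.0]
good = [0.9, 0.1, 0.8]   # close to the targets -> low cost
bad  = [0.1, 0.9, 0.2]   # far from the targets -> high cost

print(mse(targets, good))  # 0.02
print(mse(targets, bad))   # ~0.753
```

Training then amounts to adjusting the network parameters so that this value becomes as small as possible.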

Supervised learning is well suited to tasks that already provide a pattern, a goal to be reached. Some examples are classification of images, speech recognition, function approximation, and forecasting. Note that the neural network must be provided with prior knowledge of both the independent input values (X) and the dependent output values (Y). The presence of a dependent output value is a necessary condition for the learning to be supervised.
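The supervised workflow described above can be sketched with the simplest possible "network": a single weight w fitted to (X, Y) pairs by gradient descent on the mean squared error. This is an illustrative toy, not the chapter's method; real networks have many weights and nonlinear activations:

```python
# Hedged sketch of supervised learning: fit f(x) = w * x to (X, Y) pairs
# by gradient descent on the mean squared error cost.

X = [1.0, 2.0, 3.0, 4.0]
Y = [2.0, 4.0, 6.0, 8.0]   # generated by the "true" rule y = 2x

w = 0.0                     # single trainable parameter
lr = 0.01                   # learning rate
for _ in range(200):
    # gradient of the MSE with respect to w: mean of 2 * (w*x - y) * x
    grad = sum(2 * (w * x - y) * x for x, y in zip(X, Y)) / len(X)
    w -= lr * grad          # step toward lower cost

print(round(w, 3))  # close to 2.0, the rule that generated Y
```

The targets Y act as the supervisor: every update is driven by the difference between the current output w * x and the desired output y.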

Unsupervised learning

As illustrated in the following figure, in unsupervised learning, we deal only with data without any labeling or classification; instead, our neural structure tries to draw inferences and extract knowledge by taking into account only the input data X.

Unsupervised learning

This is analogous to self-learning, when people learn on their own, taking into account their experience and a set of supporting criteria. In unsupervised learning, we don't have a defined desired pattern to apply to each observation, but the neural structure can produce one by itself, without any need for supervision.

Tip

Here, the cost function still plays an important role. It strongly affects the properties of the resulting neural structure, as well as the relations it discovers within the input data.

Examples of tasks that unsupervised learning can be applied to are as follows: clustering, data compression, statistical modeling, and language modeling. This learning paradigm will be covered in more detail in Chapter 4, Self-Organizing Maps.
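To make the clustering example concrete, here is a hedged sketch of a tiny one-dimensional k-means procedure. The function name and data are hypothetical; the point is that only the inputs X are used, and no labels Y are ever provided:

```python
# Hedged sketch of unsupervised learning: 1-D k-means clustering.
# The algorithm infers group structure from the input data alone.

def kmeans_1d(xs, centers, iters=20):
    for _ in range(iters):
        # assign each point to its nearest center
        groups = [[] for _ in centers]
        for x in xs:
            i = min(range(len(centers)), key=lambda i: abs(x - centers[i]))
            groups[i].append(x)
        # move each center to the mean of its assigned points
        centers = [sum(g) / len(g) if g else c
                   for g, c in zip(groups, centers)]
    return centers

data = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7]       # two obvious clumps
print(kmeans_1d(data, centers=[0.0, 5.0]))  # centers settle near 1.0 and 9.07
```

Note that the result (the two cluster centers) is produced entirely from X; the structure itself plays the role of the missing target pattern.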
