Transfer functions

Each neuron receives as its input signal the weighted sum of the activation values of the neurons connected to it, where the weights are the synaptic weights of those connections. To compute its own activation value (that is, the signal the neuron retransmits), the neuron passes this weighted sum as the argument of the transfer function. The transfer function thus allows the receiving neuron to modify the incoming signal before transmitting it.

One of the most used functions for this purpose is the so-called sigmoid (logistic) function:

sigmoid(x) = 1 / (1 + e^(-x))

The domain of this function is all real numbers, and its co-domain is (0, 1). This means that any value a neuron obtains as output when computing its activation state will always lie between 0 and 1.
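The behavior described above can be sketched in a few lines of Python. This is a minimal, illustrative implementation (not code from the book); the two-branch form is a common trick to avoid overflow in `exp` for large negative inputs:

```python
import math

def sigmoid(x):
    """Logistic sigmoid: maps any real number into the interval (0, 1)."""
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    # For negative x, rewrite the expression so exp() never overflows.
    e = math.exp(x)
    return e / (1.0 + e)

# The output stays strictly between 0 and 1, saturating towards
# 0 for large negative inputs and towards 1 for large positive ones.
print(sigmoid(0.0))    # 0.5
print(sigmoid(-10.0))
print(sigmoid(10.0))
```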

The sigmoid function, as represented in the following diagram, can be interpreted as the saturation rate of a neuron, ranging from not being active (= 0) to complete saturation, which occurs at a predetermined maximum value (= 1):

Sigmoid function

When new data has to be analyzed, it is loaded by the input layer, which, through (a) or (b), generates an output. This result, together with the outputs of the other neurons in the same layer, forms a new input to the neurons of the next layer. The process is iterated until the last layer is reached. In the last layer of an FFNN, the softmax function is generally adopted; it comes in handy whenever we want to interpret the output of the network as an a posteriori probability.
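The layer-by-layer forward propagation just described can be sketched as follows. This is a toy example under assumed choices: a sigmoid hidden layer, a softmax output layer, and made-up weight and bias values that are purely illustrative (they are not trained parameters and do not come from the book):

```python
import math

def sigmoid(x):
    # Numerically stable logistic sigmoid.
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    e = math.exp(x)
    return e / (1.0 + e)

def softmax(ys):
    # Subtracting the max leaves the result unchanged but avoids overflow.
    m = max(ys)
    exps = [math.exp(y - m) for y in ys]
    total = sum(exps)
    return [e / total for e in exps]

def layer(inputs, weights, biases, activation):
    # Each neuron: weighted sum of the previous layer's outputs plus a
    # bias, passed through the transfer function.
    return [activation(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# Toy network: 2 inputs -> 3 sigmoid hidden neurons -> 2 softmax outputs.
W1 = [[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]]   # hypothetical values
b1 = [0.0, 0.1, -0.1]
W2 = [[0.7, -0.5, 0.2], [-0.4, 0.6, 0.3]]
b2 = [0.05, -0.05]

x = [1.0, 2.0]
hidden = layer(x, W1, b1, sigmoid)
# Output layer: compute all weighted sums first, then apply softmax
# jointly across them, since softmax couples the outputs together.
z = [sum(w * h for w, h in zip(ws, hidden)) + b for ws, b in zip(W2, b2)]
output = softmax(z)
print(output)   # two values in (0, 1) that sum to 1
```

Because the softmax outputs sum to 1, the final vector can be read directly as a probability distribution over the network's output classes.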

The softmax function is denoted as follows:

softmax(y_i) = e^(y_i) / Σ_{j=1}^{N} e^(y_j), for i = 1, ..., N

Here, N represents the total number of outputs from the net.
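A minimal Python sketch of the softmax function (an illustrative implementation, not code from the book) looks like this; subtracting the maximum score before exponentiating is a standard trick that leaves the result unchanged while avoiding overflow:

```python
import math

def softmax(ys):
    # e^(y_i - m) / sum_j e^(y_j - m) == e^(y_i) / sum_j e^(y_j),
    # so shifting by the max changes nothing mathematically.
    m = max(ys)
    exps = [math.exp(y - m) for y in ys]
    total = sum(exps)
    return [e / total for e in exps]

scores = [2.0, 1.0, 0.1]      # hypothetical raw outputs of the last layer
probs = softmax(scores)
print(probs)
print(sum(probs))             # 1, up to floating-point rounding
```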

Also, the following important properties hold for the softmax function: each output lies in the interval (0, 1), and all N outputs sum to 1, which is precisely what allows them to be interpreted as a probability distribution.
