Introducing autoencoders

An autoencoder is a network with three or more layers, in which the input and output layers have the same number of neurons and the intermediate (hidden) layers have fewer. The network is trained to simply reproduce at the output, for each input vector, the same pattern of activity presented at the input.
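To make this concrete, here is a minimal sketch of such a network written with the Keras API of TensorFlow (the layer sizes, data, and training settings are illustrative assumptions, not values from the text):

    import numpy as np
    from tensorflow.keras import layers, models

    # Illustrative dimensions (assumptions): 784 input neurons (for example,
    # 28 x 28 pixel images) compressed to a 32-neuron hidden layer, then
    # expanded back to 784 output neurons, the same number as the input.
    input_dim, hidden_dim = 784, 32

    inputs = layers.Input(shape=(input_dim,))
    hidden = layers.Dense(hidden_dim, activation='relu')(inputs)    # fewer neurons
    outputs = layers.Dense(input_dim, activation='sigmoid')(hidden) # same size as input

    autoencoder = models.Model(inputs, outputs)
    autoencoder.compile(optimizer='adam', loss='mse')

    # Train the network to reproduce its own input: the targets are the inputs.
    x_train = np.random.rand(1000, input_dim).astype('float32')  # placeholder data
    autoencoder.fit(x_train, x_train, epochs=5, batch_size=64)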

The remarkable aspect of the problem is that, due to the smaller number of neurons in the hidden layer, if the network can learn from examples and generalize to an acceptable extent, it performs data compression: for each example, the states of the hidden neurons provide a compressed version of the common input/output pattern.

In the first examples of such networks, in the 1980s, compression of simple images was obtained in this way, with results not far from those achievable with standard, more elaborate methods.

Interest in autoencoders was recently revived by authors who developed an efficient strategy to improve the learning process in this type of network (usually very slow and not always effective): a pretraining procedure that provides a good initial set of weights for the subsequent learning phase.

See the paper by G. E. Hinton and R. R. Salakhutdinov, Reducing the Dimensionality of Data with Neural Networks, Science, 2006, available at https://www.cs.toronto.edu/~hinton/science.pdf.

Useful applications of autoencoders are data denoising and dimensionality reduction for data visualization.
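As a sketch of the denoising use case, one common approach (continuing from the network defined earlier; the noise level here is an arbitrary assumption) is to corrupt the inputs with random noise and train the network to recover the clean originals:

    # Denoising: corrupt the inputs, but keep the clean data as the target.
    noise = 0.2 * np.random.normal(size=x_train.shape).astype('float32')
    x_noisy = np.clip(x_train + noise, 0.0, 1.0)

    # The autoencoder learns to map noisy inputs back to clean outputs.
    autoencoder.fit(x_noisy, x_train, epochs=5, batch_size=64)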

The following figure shows how an autoencoder typically works: it reconstructs the received input through two phases, an encoding phase, which corresponds to a dimensionality reduction of the original input, and a decoding phase, which reconstructs the original input from its encoded (compressed) representation:

Encoder and decoder phases in an autoencoder
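In code, the two phases can be exposed as separate models. Continuing the earlier sketch, the encoder maps an input to its compressed code and the decoder maps a code back to the input space (the variable names are illustrative):

    # Encoding phase: a model that stops at the hidden (compressed) layer.
    encoder = models.Model(inputs, hidden)

    # Decoding phase: a model that maps a code back to the input space,
    # reusing the trained output layer of the autoencoder.
    code_in = layers.Input(shape=(hidden_dim,))
    decoder = models.Model(code_in, autoencoder.layers[-1](code_in))

    codes = encoder.predict(x_train[:10])     # compressed representations
    reconstructions = decoder.predict(codes)  # back to the original dimension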