Optimizing TensorFlow Autoencoders

A major problem that plagues all supervised learning systems is the so-called curse of dimensionality: a progressive decline in performance as the dimension of the input space increases. This occurs because the number of samples needed to sample the input space adequately grows exponentially with the number of dimensions; for example, covering each of d dimensions with just 10 points already requires 10^d samples. To mitigate this problem, several optimizing networks have been developed.

The first are autoencoder networks. These are designed and trained to transform an input pattern into itself, so that, given a degraded or incomplete version of an input pattern, it is possible to recover the original pattern. The network is trained to produce output data that matches the data presented at its input, while the hidden layer stores a compressed version of the data, that is, a compact representation that captures the fundamental characteristics of the input.
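As a quick illustration, the following is a minimal sketch of such a network in TensorFlow/Keras; the layer sizes (784 inputs compressed to a 32-unit hidden code) are illustrative assumptions rather than values prescribed by this chapter:

    # A minimal sketch of a fully connected autoencoder in TensorFlow/Keras.
    # The layer sizes (784 -> 32 -> 784) are illustrative assumptions.
    import tensorflow as tf

    input_dim = 784    # e.g. a flattened 28x28 image
    encoding_dim = 32  # size of the compressed (hidden) representation

    inputs = tf.keras.Input(shape=(input_dim,))
    # Encoder: compress the input into a lower-dimensional code
    encoded = tf.keras.layers.Dense(encoding_dim, activation='relu')(inputs)
    # Decoder: reconstruct the original input from the code
    decoded = tf.keras.layers.Dense(input_dim, activation='sigmoid')(encoded)

    autoencoder = tf.keras.Model(inputs, decoded)
    # The network is trained to reproduce its own input,
    # so the training target equals the training input
    autoencoder.compile(optimizer='adam', loss='mse')
    # autoencoder.fit(x_train, x_train, epochs=10, batch_size=256)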

The second optimizing networks are Boltzmann machines. These networks consist of a visible input/output layer and one hidden layer. The connections between the visible layer and the hidden layer are undirected: data can travel in both directions, visible-to-hidden and hidden-to-visible, and the neural units can be fully or partially connected.

Autoencoders can be compared with Principal Component Analysis (PCA), which is used to represent a given input using fewer dimensions than originally present. In this chapter, we'll focus only on autoencoders.
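To make the comparison concrete, the following sketch performs a PCA-style reduction with plain NumPy, projecting high-dimensional data onto a small number of principal components, much as the encoder above maps inputs to a low-dimensional code; the data shape and the choice of 32 components are illustrative assumptions:

    # A small sketch contrasting PCA with an autoencoder's encoder: both map
    # data to a lower-dimensional representation. NumPy only; the data shape
    # and number of components are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 784))   # 1000 samples, 784 features

    # Center the data and take the top-k principal components via SVD
    X_centered = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X_centered, full_matrices=False)
    k = 32
    X_reduced = X_centered @ Vt[:k].T  # linear projection to 32 dimensions

    print(X_reduced.shape)             # (1000, 32)

Unlike PCA, which is restricted to linear projections, an autoencoder with nonlinear activations can learn nonlinear compressed representations.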

The topics covered are:

  • Introducing autoencoders
  • Implementing an autoencoder
  • Improving autoencoder robustness
  • Building denoising autoencoders
  • Convolutional autoencoders