Summary

In this chapter, we implemented several autoencoders. An autoencoder is essentially a data-compression network: an encoder maps a given input to a representation of smaller dimensionality, and a decoder reconstructs the input from that encoded version. Accordingly, all the autoencoders we implemented contain an encoding part and a decoding part.
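To make that structure concrete, here is a minimal sketch of such an encoder-decoder pair in tf.keras. This is not the chapter's implementation; the layer sizes, hyperparameters, and the `x_train` placeholder are illustrative assumptions:

```python
import tensorflow as tf

# Illustrative dimensions: 784 inputs (e.g. a flattened 28x28 image)
# compressed to a 32-dimensional code.
input_dim, code_dim = 784, 32

encoder = tf.keras.Sequential([
    tf.keras.layers.Dense(code_dim, activation="relu",
                          input_shape=(input_dim,)),   # compress
])
decoder = tf.keras.Sequential([
    tf.keras.layers.Dense(input_dim, activation="sigmoid",
                          input_shape=(code_dim,)),    # reconstruct
])

autoencoder = tf.keras.Sequential([encoder, decoder])
# The reconstruction target is the input itself.
autoencoder.compile(optimizer="adam", loss="mse")
# autoencoder.fit(x_train, x_train, epochs=10, batch_size=256)  # hypothetical data
```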

We also saw how to improve an autoencoder's performance by introducing noise during network training, which yields a denoising autoencoder. Finally, we applied the CNN concepts introduced in Chapter 4, TensorFlow on a Convolutional Neural Network, by implementing convolutional autoencoders.
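As a sketch of the denoising idea: corrupt the inputs with noise, but keep the clean inputs as the reconstruction target. The `corrupt` helper, the `noise_factor` value, and the `x_train` name below are hypothetical, and inputs are assumed to be scaled to [0, 1]:

```python
import numpy as np

# Hypothetical corruption helper for a denoising autoencoder: add Gaussian
# noise to the inputs, but keep the *clean* inputs as the training target.
def corrupt(x, noise_factor=0.3):          # noise_factor is illustrative
    noisy = x + noise_factor * np.random.normal(size=x.shape)
    return np.clip(noisy, 0.0, 1.0)        # assumes inputs scaled to [0, 1]

# x_train_noisy = corrupt(x_train)
# autoencoder.fit(x_train_noisy, x_train, epochs=10, batch_size=256)
```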

Even when the number of hidden units is large, an autoencoder can still discover interesting hidden structure in the dataset if we impose additional constraints on the network; in particular, a sparsity constraint on the hidden units suffices. To demonstrate this, we worked through a real-life example, credit card fraud detection, in which we successfully applied an autoencoder.
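One common way to impose such a sparsity constraint is an L1 activity penalty on the hidden layer, sketched below. The layer sizes and the 1e-5 penalty weight are illustrative assumptions, not the chapter's values:

```python
import tensorflow as tf

input_dim, hidden_dim = 784, 256  # hidden layer deliberately large

# An L1 activity penalty pushes most hidden activations toward zero, so only
# a few units fire for any given input; 1e-5 is an illustrative weight.
sparse_autoencoder = tf.keras.Sequential([
    tf.keras.layers.Dense(hidden_dim, activation="relu",
                          input_shape=(input_dim,),
                          activity_regularizer=tf.keras.regularizers.l1(1e-5)),
    tf.keras.layers.Dense(input_dim, activation="sigmoid"),
])
sparse_autoencoder.compile(optimizer="adam", loss="mse")
```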

A Recurrent Neural Network (RNN) is a class of artificial neural network in which the connections between units form a directed cycle. This cycle gives the network an internal state, allowing it to exhibit dynamic temporal behavior: an RNN can make use of information from earlier in a sequence, which makes it well suited to data with strong temporal dependencies, such as time series forecasting.
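As a preview, the recurrence at the heart of a basic RNN can be sketched in a few lines of NumPy; all shapes and weights here are illustrative assumptions, not a real trained model:

```python
import numpy as np

# Toy recurrence of a basic RNN: the hidden state h depends on the current
# input x_t and on the previous state, which is what the directed cycle
# unrolls into over time.
input_dim, hidden_dim, timesteps = 4, 8, 5

rng = np.random.default_rng(0)
W_xh = 0.1 * rng.normal(size=(input_dim, hidden_dim))
W_hh = 0.1 * rng.normal(size=(hidden_dim, hidden_dim))
b_h = np.zeros(hidden_dim)

h = np.zeros(hidden_dim)  # the network's internal state
for t in range(timesteps):
    x_t = rng.normal(size=input_dim)          # one time step of input
    h = np.tanh(x_t @ W_xh + h @ W_hh + b_h)  # h_t = tanh(x_t·W_xh + h_{t-1}·W_hh + b)
```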

In the next chapter, we'll examine RNNs in detail. We will start by describing the basic principles of these networks, and then we'll implement some interesting examples of these architectures.
