Summary

In this chapter, we have walked the reader toward an understanding of what deep learning is and how it relates to deep neural networks. We have seen that many implementations of deep neural networks exist besides the classical feed-forward one, and we have discussed the recent successes deep learning has had on many standard classification tasks. This chapter has been rich with concepts and ideas, developed through examples and historical remarks ranging from the Jacquard loom to the Ising model. This is just the beginning: in the chapters that follow, we will work through many examples in which the ideas introduced here are explained and developed more precisely.

We are going to start this process in the coming chapter, where we will finally introduce the reader to many of the concepts we have touched on here, such as RBMs and auto-encoders, and it will become clear how we can create deep neural networks that are more powerful than simple feed-forward DNNs. It will also become clear how the concepts of representations and features arise naturally in these particular neural networks. The last example, using CIFAR-10, showed that classical feed-forward DNNs are difficult to train on more complex datasets and that we need a better way to set the weight parameters. The paper by X. Glorot and Y. Bengio, "Understanding the difficulty of training deep feedforward neural networks," in Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS 2010), http://jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf, treats the issue of the poor performance of deep neural networks trained with gradient descent from random weight initialization. New algorithms that can be used to train deep neural networks successfully will be introduced and discussed in the next chapter.
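To give a concrete sense of what a better weight initialization looks like, here is a minimal sketch in NumPy of the uniform scheme proposed in that paper, commonly known as Glorot (or Xavier) initialization; the function name glorot_uniform and the layer sizes are illustrative choices, not code from this chapter.

```python
import numpy as np

def glorot_uniform(fan_in, fan_out, rng=None):
    # Glorot and Bengio (2010) suggest drawing weights from
    # U[-limit, limit] with limit = sqrt(6 / (fan_in + fan_out)),
    # which keeps the variance of activations and gradients roughly
    # constant from layer to layer in a deep feed-forward network.
    rng = rng or np.random.default_rng()
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

# Example: weights for a layer mapping 3072 inputs (one CIFAR-10
# image, 32 x 32 x 3) to 1024 hidden units.
W = glorot_uniform(3072, 1024)
```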
