Chapter 10. Current Trends in Neural Networks

This final chapter introduces the reader to the most recent trends in neural networks. Although this book is introductory, it is always useful to be aware of the latest developments and where the field is heading. Among the latest advancements is so-called deep learning, a very popular research field for many data scientists; this type of network is briefly covered in this chapter. Convolutional and cognitive architectures follow this trend and are gaining popularity for multimedia data recognition. Hybrid systems that combine different architectures are a very interesting strategy for solving more complex problems, as well as for applications that involve analytics, data visualization, and so on. Because this chapter is more theoretical, there is no actual implementation of these architectures, although an example implementation of a hybrid system is provided. Topics covered in this chapter include:

  • Deep learning
  • Convolutional neural networks
  • Long short-term memory (LSTM) networks
  • Hybrid systems
  • Neuro-fuzzy systems
  • Neuro-genetic systems
  • Implementation of a hybrid neural network

Deep learning

One of the latest advancements in neural networks is so-called deep learning. Nowadays it is nearly impossible to talk about neural networks without mentioning it, because recent research on feature extraction, data representation, and transformation has found that many layers of information processing are able to abstract and produce better representations of data for learning. Throughout this book we have seen that neural networks require input data in numerical form; whether the original data is categorical or binary, neural networks cannot process non-numerical data directly. But it turns out that in the real world most data is non-numerical or even unstructured, such as images, videos, audio, text, and so on.
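As a minimal illustration of turning categorical data into the numerical form a neural network requires, the following sketch applies one-hot encoding, a common technique in which each category becomes a binary indicator vector (the function name and data here are illustrative, not from the book):

```python
def one_hot_encode(values):
    """Map each distinct category to a binary indicator vector.

    Returns the sorted list of categories and one vector per input
    value, with a 1.0 in the position of that value's category.
    """
    categories = sorted(set(values))
    index = {c: i for i, c in enumerate(categories)}
    encoded = []
    for v in values:
        vec = [0.0] * len(categories)
        vec[index[v]] = 1.0
        encoded.append(vec)
    return categories, encoded

# Categorical values such as colors cannot feed a network directly...
cats, vecs = one_hot_encode(["red", "green", "red", "blue"])
# ...but their one-hot vectors can.
print(cats)     # ['blue', 'green', 'red']
print(vecs[0])  # [0.0, 0.0, 1.0]  -> "red"
```

Images, audio, and text need richer encodings, but the principle is the same: the raw data must be mapped to numbers before any layer can process it.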

In this sense, a deep network has many layers that act as data processing units, each transforming the data and providing it to the next layer for subsequent processing. This is analogous to what happens in the brain, from the nerve endings to the cognitive core: along this long path, signals are processed by multiple layers before resulting in the signals that control the human body. Currently, most research on deep learning has focused on the processing of unstructured data, particularly image recognition, sound recognition, and natural language processing.

Tip

Deep learning is still under active development, and much has changed since 2012. Big companies such as Google and Microsoft have teams researching this field, and much is likely to change in the next few years.

A scheme of a deep learning architecture is shown in the following figure:

Figure: Deep learning architecture

On the other hand, deep neural networks have some problems that need to be overcome. The main one is overfitting. The many layers that produce new representations of data are very sensitive to the training data: the deeper a signal reaches into the network, the more specific to the training inputs the transformation becomes. Regularization methods and pruning are often applied to prevent overfitting. Computation time is another common issue in training deep networks. The standard backpropagation algorithm can take a very long time to train a deep neural network, although strategies such as selecting a smaller training dataset can speed up training. In addition, to train a deep neural network it is often recommended to use a faster machine and to parallelize the training as much as possible.
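One of the regularization methods mentioned above can be sketched concretely. L2 regularization (weight decay) adds a penalty proportional to each weight to the gradient update, shrinking weights toward zero and discouraging the overly specific transformations associated with overfitting. The function below is a hypothetical illustration, not an implementation from the book:

```python
def sgd_step_l2(weights, grads, lr=0.1, lam=0.01):
    """One gradient-descent step with L2 weight decay.

    The penalty term lam * w is added to each weight's gradient,
    so every step also pulls the weight slightly toward zero.
    """
    return [w - lr * (g + lam * w) for w, g in zip(weights, grads)]

w = [2.0, -1.5, 0.5]
g = [0.0, 0.0, 0.0]  # zero data gradient isolates the decay effect
w_new = sgd_step_l2(w, g)
print(w_new)  # each weight is slightly smaller in magnitude
```

With the data gradient zeroed out, each weight is multiplied by (1 - lr * lam), making the shrinking effect of the penalty easy to see in isolation.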
