Benefits and limitations

The advantages and disadvantages of neural networks depend on which other machine learning methods they are compared to. However, neural-network-based classifiers, particularly the multilayer perceptron (MLP) trained with error backpropagation, have some clear advantages:

  • The mathematical foundation of a neural network does not require expertise in dynamic programming or advanced linear algebra beyond an understanding of the basic gradient descent algorithm.
  • A neural network can perform tasks that a linear algorithm cannot, such as learning the XOR function (see the sketch following this list).
  • An MLP is usually reliable for highly dynamic and nonlinear processes. Contrary to support vector machines, it does not require increasing the problem dimension through kernelization.
  • An MLP does not make any assumptions about linearity, variable independence, or normality.
  • The training of an MLP lends itself well to concurrent processing, particularly for online training. In most architectures, the algorithm can continue even if a node in the network fails (refer to the Apache Spark section in Chapter 12, Scalable Frameworks).
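
To make the second advantage concrete, the following is a minimal, self-contained sketch of a 2-2-1 multilayer perceptron trained with online backpropagation to learn XOR, a function that no linear classifier can represent. The object name XorMlp, the network size, the random seed, and the hyperparameters are illustrative choices for this sketch, not code from this chapter:

object XorMlp extends App {
  import scala.util.Random

  val rng = new Random(1)
  def sigmoid(x: Double): Double = 1.0 / (1.0 + math.exp(-x))

  // Hidden layer: 2 neurons with weights (bias, w1, w2);
  // output neuron with weights (bias, v1, v2)
  val wHidden = Array.fill(2, 3)(rng.nextDouble() - 0.5)
  val wOut = Array.fill(3)(rng.nextDouble() - 0.5)
  val eta = 0.5   // learning rate (illustrative value)

  val samples = Seq(
    (Array(0.0, 0.0), 0.0), (Array(0.0, 1.0), 1.0),
    (Array(1.0, 0.0), 1.0), (Array(1.0, 1.0), 0.0))

  def forward(x: Array[Double]): (Array[Double], Double) = {
    val h = wHidden.map(w => sigmoid(w(0) + w(1)*x(0) + w(2)*x(1)))
    (h, sigmoid(wOut(0) + wOut(1)*h(0) + wOut(2)*h(1)))
  }

  // Online training: weights are updated after each observation
  for (_ <- 0 until 10000; (x, t) <- samples) {
    val (h, y) = forward(x)
    val dOut = (y - t) * y * (1 - y)   // error gradient at the output neuron
    // Hidden-layer gradients, computed before wOut is overwritten
    val dHid = Array.tabulate(2)(j => dOut * wOut(j + 1) * h(j) * (1 - h(j)))
    wOut(0) -= eta * dOut
    for (j <- 0 to 1) {
      wOut(j + 1) -= eta * dOut * h(j)
      wHidden(j)(0) -= eta * dHid(j)
      wHidden(j)(1) -= eta * dHid(j) * x(0)
      wHidden(j)(2) -= eta * dHid(j) * x(1)
    }
  }

  samples.foreach { case (x, t) =>
    println(f"${x.mkString(",")} -> ${forward(x)._2}%.3f (target $t%.0f)")
  }
}

With this configuration, the four XOR patterns are usually learned within a few thousand epochs, although backpropagation can occasionally stall in a local minimum; this foreshadows the tuning issues listed among the limitations below.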

However, as with any machine learning algorithm, neural networks have their detractors. The most documented limitations are as follows:

  • MLP models are black boxes for which the association between features and classes may not be easily described or understood.
  • An MLP requires a lengthy training process, especially with the batch training strategy. For example, a two-layer network has a time complexity, measured as the number of multiplications, of O((n.m + m.p).N.e) for n input variables, m hidden neurons, p output values, N observations, and e epochs (a back-of-the-envelope estimate follows this list). It is not uncommon for a solution to emerge only after thousands of epochs. The online training strategy using a momentum factor tends to converge faster and requires fewer epochs than the batch process.
  • Tuning the configuration parameters, such as the learning rate and momentum factors, the choice of the activation method, and the cumulative error formula, can turn into a lengthy process.
  • Estimating the minimum size of the training set required to generate an accurate model while limiting the computation time is not straightforward.
  • A neural network cannot be incrementally retrained: any new labeled data requires the execution of several training epochs.
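
To put the training cost in perspective, the following back-of-the-envelope calculation plugs sizes into the multiplication count quoted above; all values of n, m, p, N, and e are hypothetical, chosen only to make the arithmetic concrete:

// Rough training cost from the multiplication count O((n.m + m.p).N.e)
val n = 20        // input variables
val m = 64        // hidden neurons
val p = 3         // output values
val N = 100000L   // observations
val e = 1000L     // epochs
val mults = (n.toLong*m + m.toLong*p) * N * e
println(f"$mults%,d multiplications")   // 147,200,000,000

Even these modest sizes yield on the order of 10^11 multiplications per training run, which explains why batch training over thousands of epochs is lengthy and why the online strategy with a momentum factor, which typically needs fewer epochs, is attractive.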

Note

Other types of neural networks

This chapter covers the multilayer perceptron and introduces the concept of a convolutional neural network. There are many more types of neural networks, such as recurrent networks and mixture density networks.
