Adding more hidden layers to the networks

We have just achieved 91.3% accuracy with a single-layer neural network model. In theory, we can obtain a better result with more than one hidden layer. As an example, we now build a deep neural network model with two hidden layers.

Weight optimization in feed-forward deep neural networks is also realized through the backpropagation algorithm, just as in single-layer networks. However, the more layers there are, the higher the computational complexity and the slower the model convergence. One way to accelerate the weight optimization is to use a more computationally efficient activation function. The most popular one in recent years is the rectified linear unit (ReLU):

relu(z) = max(0, z)

The plot of the ReLU function is flat at zero for negative inputs and rises linearly for positive inputs.
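
You can reproduce such a plot with a few lines of base R; the relu helper below is defined only for this illustration and is not part of MXNet:

> relu <- function(z) pmax(0, z)
> z <- seq(-5, 5, by = 0.1)
> plot(z, relu(z), type = "l", lwd = 2,
xlab = "z", ylab = "relu(z)", main = "ReLU activation function")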

Thanks to the properties of its derivative, relu'(z) = 1 for z > 0 and 0 otherwise, there are two main advantages of using the ReLU activation function over sigmoid:

  • Faster learning, because relu'(z) takes the constant value 1 for all positive inputs, compared to the derivative of the logistic function, sigmoid'(z) = sigmoid(z)(1 - sigmoid(z)), which is at most ¼ and shrinks towards zero for large |z|.
  • Less likely to suffer from the vanishing gradient problem, that is, an exponential decrease of the gradient, which can be found in networks with multiple stacked sigmoid layers. Since we multiply the derivative of the activation function when calculating the errors δ for each layer, and the maximal value of sigmoid'(z) is ¼, the gradients decrease exponentially as we stack more and more sigmoid layers, as the short numerical sketch after this list illustrates.
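
To make the second point concrete, here is a toy calculation, not taken from the book's code, that multiplies the activation derivatives across ten stacked layers while ignoring the weight terms:

> # Toy illustration: product of activation derivatives across 10 stacked layers
> sigmoid <- function(z) 1 / (1 + exp(-z))
> sigmoid_grad <- function(z) sigmoid(z) * (1 - sigmoid(z))
> relu_grad <- function(z) as.numeric(z > 0)
> z <- 0.5                          # an arbitrary pre-activation value
> prod(rep(sigmoid_grad(z), 10))    # roughly 5e-07: the gradient all but vanishes
> prod(rep(relu_grad(z), 10))       # 1: the gradient passes through unchanged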

The nnet package we used in previous sections is (as of now) only capable of modeling a single-layer network. In this chapter, we use the MXNet package to implement deep neural networks with multiple hidden layers. MXNet (https://mxnet.incubator.apache.org/) is a deep learning framework that supports multiple programming languages, including R, Scala, Python, Julia, C++, and Perl. It is developed by the DMLC (http://dmlc.ml/) team, a group of experts collaborating on open-source machine learning projects. It is portable and can scale to multiple CPUs, multiple GPUs, and multiple machines, for example, in the cloud. Most importantly, it allows us to flexibly and efficiently construct state-of-the-art deep learning models, including deep neural networks, CNNs, and RNNs.

Let's install MXNet first:

> cran <- getOption("repos")
> cran["dmlc"] <- "https://s3-us-west-2.amazonaws.com/apache-mxnet/R/CRAN/"
> options(repos = cran)
> if (!require("mxnet"))
install.packages("mxnet")

Now we can import MXNet and convert the data into the format preferred by the neural network models in MXNet:

> require(mxnet)
> data_train <- data.matrix(data_train)
> data_train.x <- data_train[,-1]
> data_train.x <- t(data_train.x/255)   # scale to [0, 1] and transpose so each column is one sample
> data_train.y <- data_train[,1]

Note that we scale the input features to a range from 0 to 1 by dividing by the maximum possible value, 255. Otherwise, the deep neural network may be skewed towards certain features, and such skewness will accumulate over layers.

Now that the training dataset is ready, we can start constructing the network by defining its architecture as follows:

> data <- mx.symbol.Variable("data")
> fc1 <- mx.symbol.FullyConnected(data, name="fc1", num_hidden=128)
> act1 <- mx.symbol.Activation(fc1, name="relu1", act_type="relu")
> fc2 <- mx.symbol.FullyConnected(act1, name="fc2", num_hidden=64)
> act2 <- mx.symbol.Activation(fc2, name="relu2", act_type="relu")
> fc3 <- mx.symbol.FullyConnected(act2, name="fc3", num_hidden=10)
> softmax <- mx.symbol.SoftmaxOutput(fc3, name="sm")

In MXNet's Symbol API, we represent the network with the symbol data type. We begin with the input layer data, which holds the input data, and follow with the first hidden layer fc1 of 128 nodes, which is fully connected to the input layer. We then attach the ReLU function to fc1 and output the activations act1 for this layer. Similarly, we chain another hidden layer fc2, with 64 nodes this time, and output the ReLU activations act2. Finally, we end with the output layer fc3 of 10 nodes followed by a softmax function, generating 10 probabilities corresponding to the 10 classes. The overall structure is: data -> fc1 (128 nodes) -> relu1 -> fc2 (64 nodes) -> relu2 -> fc3 (10 nodes) -> softmax.
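
If you want to double-check the wiring before training, the mxnet R package can list the parameters a symbol expects and, with the DiagrammeR package installed, draw the computation graph. The two calls below are an optional sketch based on the standard mxnet R API:

> arguments(softmax)   # lists the inputs, weights, and biases the network expects
> graph.viz(softmax)   # draws the computation graph (requires the DiagrammeR package)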

With the skeleton of the network built, it is time to train the model. We can choose our computation device, CPU and/or GPU; here is a CPU example:

> devices <- mx.cpu()
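
If you have a GPU-enabled build of MXNet, you can point the training to a GPU instead; this alternative is not required for the example:

> # devices <- mx.gpu(0)   # use the first GPU instead of the CPU, if your MXNet build supports it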

Before training, don't forget to set the random seed to make the modeling process reproducible:

> mx.set.seed(42)
> model_dnn <- mx.model.FeedForward.create(softmax, X=data_train.x,
y=data_train.y, ctx=devices, num.round=30, array.batch.size=100,
learning.rate=0.01, momentum=0.9, eval.metric=mx.metric.accuracy,
initializer=mx.init.uniform(0.1),
epoch.end.callback=mx.callback.log.train.metric(100))
Start training with 1 devices
[1] Train-accuracy=0.724793650793651
[2] Train-accuracy=0.904715189873417
[3] Train-accuracy=0.925537974683544
[4] Train-accuracy=0.939936708860759
[5] Train-accuracy=0.950379746835443
[6] Train-accuracy=0.95873417721519
[7] Train-accuracy=0.96509493670886
[8] Train-accuracy=0.969905063291139
[9] Train-accuracy=0.974303797468355
[10] Train-accuracy=0.977784810126584
[11] Train-accuracy=0.980696202531648
[12] Train-accuracy=0.983164556962027
[13] Train-accuracy=0.985284810126584
[14] Train-accuracy=0.987405063291141
[15] Train-accuracy=0.988924050632913
[16] Train-accuracy=0.990727848101267
[17] Train-accuracy=0.992088607594938
[18] Train-accuracy=0.993227848101268
[19] Train-accuracy=0.994398734177217
[20] Train-accuracy=0.995284810126584
[21] Train-accuracy=0.995854430379748
[22] Train-accuracy=0.996835443037975
[23] Train-accuracy=0.997183544303798
[24] Train-accuracy=0.997848101265823
[25] Train-accuracy=0.998164556962026
[26] Train-accuracy=0.998575949367089
[27] Train-accuracy=0.998924050632912
[28] Train-accuracy=0.999177215189874
[29] Train-accuracy=0.999367088607595
[30] Train-accuracy=0.999525316455696

We just fit the model with hyperparameters including:

  • num.round = 30: The maximum number of iterations is set to be 30.
  • array.batch.size = 100: The batch size of the mini-batch gradient descent is 100. As a variation of stochastic gradient descent, the mini-batch gradient descent algorithm calculates costs and gradients on small batches instead of individual training samples. Hence, it is computationally more efficient and allows faster model convergence. As a result, mini-batch gradient descent is more commonly used in training deep neural networks.
  • learning.rate = 0.01: The learning rate is 0.01.
  • momentum=0.9: In general, the cost function of deep architectures has the form of one or more shallow ravines (local minima) leading to the global optimum. Momentum, as seen in the physical law of motion, is employed to avoid getting stuck in a sub-optimum and to make convergence faster. With momentum, weights are updated as follows:

v := γ · v + η · ∇J(w)
w := w - v

where v on the right-hand side is the previous velocity, v on the left-hand side is the current one, η is the learning rate, ∇J(w) is the gradient of the cost with respect to the weights w, and γ ∈ (0,1] is the momentum factor determining how much of the previous velocity is incorporated into the current one. A toy R sketch of this update appears after this list.

  • eval.metric=mx.metric.accuracy: It uses classification accuracy as the evaluation metric.
  • initializer=mx.init.uniform(0.1): Initial weights are randomly generated from a uniform distribution between -0.1 and 0.1, so as to lower the chances of the weights exploding or vanishing in the deep network.
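
To make the momentum update tangible, here is a toy, standalone R sketch of a single mini-batch step; the numbers are made up and this is not MXNet's internal implementation:

> gamma <- 0.9                   # momentum factor
> eta <- 0.01                    # learning rate
> w <- c(0.5, -0.3)              # current weights (made-up values)
> v <- c(0, 0)                   # previous velocity
> grad <- c(0.2, -0.1)           # gradient of the cost on one mini-batch of 100 samples
> v <- gamma * v + eta * grad    # new velocity
> w <- w - v                     # momentum-based weight update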

After the model is trained, let's see how it performs on the testing set. First, remember to conduct the same pre-processing on the test dataset:

> data_test.x <- data_test[,-1]
> data_test.x <- t(data_test.x/255)

Then, predict the testing cases and evaluate the performance:

> prob_dnn <- predict(model_dnn, data_test.x)   # class probabilities for each test sample
> prediction_dnn <- max.col(t(prob_dnn)) - 1    # pick the most probable class, mapped to labels 0-9
> cm_dnn = table(data_test$label, prediction_dnn)
> cm_dnn
   prediction_dnn
       0    1    2    3    4    5    6    7    8    9
  0 1041    0    2    0    0    1    3    0    8    1
  1    0 1157    3    1    1    0    1    3    1    0
  2    2    1  993    3    3    1    2   13    5    2
  3    1    3   14 1033    1   13    0    5   14    6
  4    0    2    1    0  991    0    4    4    1   12
  5    4    2    3   12    3  892    4    3    6    8
  6   10    0    1    0    3    4  988    0    4    0
  7    0    5    9    1    2    0    0 1116    2    1
  8    4    8    3    5    0    8    3    2 1020   12
  9    1    1    0    4   13    3    0   16    2  957
> accuracy_dnn = mean(prediction_dnn == data_test$label)
> accuracy_dnn
[1] 0.9704706 

By adding one more hidden layer, accuracy is improved from 91.3% to 97.0%! Since each hidden layer in a deep neural network provides representations of the data at a certain level, can we simply conclude that the more hidden layers there are (say 100, 1,000, 10,000...), the more underlying patterns are discovered and the better the classification accuracy? This might be true if we have plentiful resources and time to carry out the computation and to make sure overfitting does not occur with such complex networks. Is there any way we can extract richer and more informative representations than by simply chaining more hidden layers, while not excessively growing our networks? The answer is CNNs.
