Practical example 1 – the XOR case with delta rule and backpropagation

Now let's see the multilayer perceptron in action. We coded this example in XORTest.java, which creates two neural networks with the following features:

| Neural Network | Perceptron | Multi-layer Perceptron |
| --- | --- | --- |
| Inputs | 2 | 2 |
| Outputs | 1 | 1 |
| Hidden Layers | 0 | 1 |
| Hidden Neurons in each layer | 0 | 2 |
| Hidden Layer Activation Function | None | Sigmoid |
| Output Layer Activation Function | Linear | Linear |
| Training Algorithm | Delta Rule | Backpropagation |
| Learning Rate | 0.1 | 0.3 |
| Momentum | – | 0.6 |
| Max Epochs | 4000 | 4000 |
| Min. Overall Error | 0.1 | 0.01 |

In Java, this is coded as follows:

public class XORTest {
  public static void main(String[] args){
    RandomNumberGenerator.seed=0; // fixed seed so runs are reproducible

    int numberOfInputs=2;
    int numberOfOutputs=1;

    int[] numberOfHiddenNeurons={2}; // one hidden layer with 2 neurons

    Linear outputAcFnc = new Linear(1.0);
    Sigmoid hdAcFnc = new Sigmoid(1.0);
    IActivationFunction[] hiddenAcFnc={hdAcFnc};

    // single-layer perceptron: no hidden layer
    NeuralNet perceptron = new NeuralNet(numberOfInputs,
         numberOfOutputs,outputAcFnc);

    // multi-layer perceptron: one sigmoid hidden layer
    NeuralNet mlp = new NeuralNet(numberOfInputs,numberOfOutputs
                ,numberOfHiddenNeurons,hiddenAcFnc,outputAcFnc);
  }
}

Then, we define the dataset and the learning algorithms:

Double[][] _neuralDataSet = {
            {0.0 , 0.0 , 0.0 }
        ,   {0.0 , 1.0 , 1.0 }
        ,   {1.0 , 0.0 , 1.0 }
        ,   {1.0 , 1.0 , 0.0 }
        };
        
int[] inputColumns = {0,1};
int[] outputColumns = {2};
        
NeuralDataSet neuralDataSet = new NeuralDataSet(_neuralDataSet,inputColumns,outputColumns);
        
DeltaRule deltaRule=new DeltaRule(perceptron,neuralDataSet
                ,LearningAlgorithm.LearningMode.ONLINE);
        
deltaRule.printTraining=true;
deltaRule.setLearningRate(0.1);
deltaRule.setMaxEpochs(4000);
deltaRule.setMinOverallError(0.1);
        
Backpropagation backprop = new Backpropagation(mlp,neuralDataSet
                ,LearningAlgorithm.LearningMode.ONLINE);

backprop.printTraining=true;
backprop.setLearningRate(0.3);
backprop.setMaxEpochs(4000);
backprop.setMinOverallError(0.01);
backprop.setMomentumRate(0.6);

The training is then performed for both algorithms. As expected, the XOR case is not linearly separable, so the single-layer perceptron runs its training but fails to reach the minimum overall error:

deltaRule.train();
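This failure is inevitable, not a matter of tuning. To see why, suppose some threshold unit with generic weights $w_1, w_2$, bias $b$, and threshold $\theta$ did reproduce the XOR truth table (this short derivation is added for clarity; the symbols are illustrative, not from the book's code). The four input patterns would impose:

```latex
\begin{aligned}
w_1\cdot 0 + w_2\cdot 0 + b &\le \theta && (0 \oplus 0 = 0)\\
w_1\cdot 1 + w_2\cdot 0 + b &> \theta && (1 \oplus 0 = 1)\\
w_1\cdot 0 + w_2\cdot 1 + b &> \theta && (0 \oplus 1 = 1)\\
w_1\cdot 1 + w_2\cdot 1 + b &\le \theta && (1 \oplus 1 = 0)
\end{aligned}
```

Adding the second and third inequalities gives $w_1 + w_2 + 2b > 2\theta$; since the first gives $b \le \theta$, it follows that $w_1 + w_2 + b > 2\theta - b \ge \theta$, contradicting the fourth inequality. Hence no single-layer unit can compute XOR.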

But the backpropagation algorithm for the multilayer perceptron manages to learn the XOR function after 39 epochs:

backprop.train();
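For readers who want to experiment without the book's library, the same 2-2-1 architecture can be trained with a from-scratch online backpropagation loop. The sketch below is a self-contained illustration, not the book's implementation: class and variable names are invented, it uses a sigmoid (rather than linear) output for simplicity, and with different initialization and no momentum it will not converge in exactly 39 epochs.

```java
import java.util.Random;

// Minimal online-backpropagation sketch for XOR: 2 inputs, 2 sigmoid hidden
// neurons, 1 sigmoid output. Illustrative only; independent of NeuralNet.
public class XorBackpropSketch {
    static double sigmoid(double v) { return 1.0 / (1.0 + Math.exp(-v)); }

    // Trains the network and returns {initial epoch error, final epoch error}.
    static double[] train() {
        double[][] x = {{0,0},{0,1},{1,0},{1,1}};
        double[] t = {0,1,1,0};                     // XOR targets
        Random rnd = new Random(0);                 // fixed seed, as in XORTest
        double[][] wh = new double[2][2];           // input -> hidden weights
        double[] bh = new double[2], wo = new double[2];
        double bo = rnd.nextDouble() - 0.5;         // output bias
        for (int i = 0; i < 2; i++) {               // small random init
            bh[i] = rnd.nextDouble() - 0.5;
            wo[i] = rnd.nextDouble() - 0.5;
            wh[i][0] = rnd.nextDouble() - 0.5;
            wh[i][1] = rnd.nextDouble() - 0.5;
        }
        double lr = 0.5, first = 0, err = 0;
        for (int epoch = 0; epoch < 20000; epoch++) {
            err = 0;
            for (int p = 0; p < 4; p++) {
                // forward pass
                double[] h = new double[2];
                for (int i = 0; i < 2; i++)
                    h[i] = sigmoid(wh[i][0]*x[p][0] + wh[i][1]*x[p][1] + bh[i]);
                double y = sigmoid(wo[0]*h[0] + wo[1]*h[1] + bo);
                err += 0.5*(t[p]-y)*(t[p]-y);
                // backward pass: deltas, then online weight updates
                double dy = (t[p]-y)*y*(1-y);           // output delta
                for (int i = 0; i < 2; i++) {
                    double dh = dy*wo[i]*h[i]*(1-h[i]); // hidden delta
                    wo[i] += lr*dy*h[i];
                    wh[i][0] += lr*dh*x[p][0];
                    wh[i][1] += lr*dh*x[p][1];
                    bh[i] += lr*dh;
                }
                bo += lr*dy;
            }
            if (epoch == 0) first = err;
        }
        return new double[]{first, err};
    }

    public static void main(String[] args) {
        double[] e = train();
        System.out.println("error: " + e[0] + " -> " + e[1]);
    }
}
```

Unlike the single-layer perceptron above, the hidden layer lets the error keep decreasing on XOR, which is what the contrast between the two training runs demonstrates.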