Coding of the Kohonen algorithm

Now it is time to get hands-on and implement the Kohonen neural network in Java. Thanks to the OOP concepts applied in the previous versions of the Java code, it is possible to implement new features with little effort and without rewriting the code already completed in the project. For the sake of simplicity, for now, we will implement only competitive learning and the single-neuron weight update rule. The changes made are shown in the following table:

Class name: NeuralNet

Note: This class already exists in the previous version and has been updated as follows:

Attributes

private double[][] validationSet;

Matrix to store the validation set of input data

Methods

Note: The getter and setter methods of this attribute were also created.

Class implementation with Java: file NeuralNet.java

Interface name: Validation

Note: In Java, an interface is a structure that may contain constant attributes and/or method signatures that must be implemented by any class that implements the interface.

Attributes

None

Method

public void netValidation(NeuralNet n);

Performs neural network validation, printing some results on the screen

Parameters: NeuralNet object (neural net trained)

Returns: -

Interface implementation with Java: file Validation.java
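Based on the method signature shown in the table, the Validation interface can be sketched as follows. Note that the NeuralNet class below is only a minimal stand-in for the project's real class, included here so the snippet compiles on its own:

```java
// Minimal stand-in for the project's NeuralNet class, so this
// snippet is self-contained; the real class lives in NeuralNet.java.
class NeuralNet { }

interface Validation {
    // Performs validation of a trained network, printing results to the screen
    void netValidation(NeuralNet n);
}
```

Because the interface declares a single abstract method, any class (such as Kohonen) that implements it is forced by the compiler to provide a netValidation implementation.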

Class name: Kohonen

Note: This class inherits from NeuralNet and implements the Validation interface.

Attributes

None

Method

public NeuralNet train (NeuralNet n)

Trains the neural network by applying the Kohonen algorithm. This method overrides the method from the Training class

Parameters: NeuralNet object (neural net untrained)

Returns: NeuralNet object (neural net trained via Kohonen)

private NeuralNet initNet (NeuralNet n)

Initializes the listOfWeightOut of the input layer's neurons with zeros

Parameters: NeuralNet object without the input layer initialized

Returns: NeuralNet object with the input layer initialized

private ArrayList<Double> calcEuclideanDistance (NeuralNet n, double[][] data, int row)

Calculates the Euclidean distance between the training data and the weights of the neural network

Parameters: NeuralNet object, training data, and the row of training data

Returns: List of real values with Euclidean distances

private NeuralNet fixWinnerWeights (NeuralNet n, int winnerNeuron, int trainSetRow)

Adjusts the weights of the winner neuron (on the basis of the Euclidean distance list)

Parameters: NeuralNet object, winner neuron index, training set row number

Returns: NeuralNet object with weights from the input layer modified

public void netValidation(NeuralNet n)

Performs validation of the trained neural network, printing the clustering results on the screen

Parameters: NeuralNet object with the neural net trained

Returns: -

Class implementation with Java: file Kohonen.java

The class diagram changes are shown in the following figure. Attributes and methods already explained in previous chapters and their configuration methods (getters and setters) are not shown.

[Figure: Updated class diagram showing the Kohonen changes]

Exploring the Kohonen class

The Kohonen class implements the Validation interface, which provides a validation method to check which output neuron is chosen for each validation sample. Let's concentrate on three key methods present in this class: calcEuclideanDistance, fixWinnerWeights, and train.

The Euclidean distance is calculated according to the equation shown in the Section SOM learning algorithm, as can be seen in the following code:

  private ArrayList<Double> calcEuclideanDistance(NeuralNet n, double[][] data, int row) {
    ArrayList<Double> listOfDistances = new ArrayList<Double>();

    int weight_i = 0;
    // Outer loop: one (squared) distance per output neuron (cluster)
    for (int cluster_i = 0; cluster_i < n.getOutputLayer().getNumberOfNeuronsInLayer(); cluster_i++) {

      double distance = 0.0;

      // Inner loop: accumulate the squared differences over all input variables
      for (int input_j = 0; input_j < n.getInputLayer().getNumberOfNeuronsInLayer(); input_j++) {

        double weight = n.getInputLayer().getListOfNeurons().get(0)
            .getListOfWeightOut().get(weight_i);
        distance = distance + Math.pow(data[row][input_j] - weight, 2.0);
        weight_i++;

      }

      listOfDistances.add(distance);
    }
    return listOfDistances;
  }

This method receives the dataset as a parameter and computes the distance of every output neuron to a given row of that dataset. We can see two for loops in this method: the outer loop iterates over all the neurons in the output layer, whereas the inner loop iterates over all the input variables of the corresponding row in the dataset. The distance is accumulated inside the inner loop and, once that loop finishes, saved in the list of distances that is returned.
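Stripped of the NeuralNet plumbing, the quantity accumulated by the inner loop is the squared Euclidean distance between one input row and one neuron's weight vector. A standalone sketch (the class and method names here are illustrative, not part of the project):

```java
public class DistanceSketch {
    // Squared Euclidean distance between an input row and one neuron's
    // weight vector, as accumulated by the inner loop in the listing above.
    public static double squaredDistance(double[] input, double[] weights) {
        double distance = 0.0;
        for (int j = 0; j < input.length; j++) {
            double diff = input[j] - weights[j];
            distance += diff * diff;
        }
        return distance;
    }
}
```

Note that, as in the listing, the square root is never taken: the neuron that minimizes the squared distance also minimizes the true Euclidean distance, so the sqrt can safely be skipped when only the winner matters.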

The weight update rule is implemented in the fixWinnerWeights method, which receives the winner neuron as a parameter. The code of this method is listed as follows:

  private NeuralNet fixWinnerWeights(NeuralNet n, int winnerNeuron, int trainSetRow) {
    // The winner neuron's weights occupy positions [start, last) of the
    // flat weight list stored in the single input-layer neuron
    int start = winnerNeuron * n.getInputLayer().getNumberOfNeuronsInLayer();
    int last = start + n.getInputLayer().getNumberOfNeuronsInLayer();

    List<Double> listOfOldWeights = n.getInputLayer().getListOfNeurons()
        .get(0).getListOfWeightOut().subList(start, last);

    ArrayList<Double> listOfWeights = n.getInputLayer().getListOfNeurons()
        .get(0).getListOfWeightOut();

    int col_i = 0;
    for (int j = start; j < last; j++) {
      double trainSetValue = n.getTrainSet()[trainSetRow][col_i];
      // Kohonen update rule: w = w + learningRate * (x - w)
      double newWeight = listOfOldWeights.get(col_i)
          + n.getLearningRate() * (trainSetValue - listOfOldWeights.get(col_i));

      listOfWeights.set(j, newWeight);
      col_i++;
    }

    n.getInputLayer().getListOfNeurons().get(0).setListOfWeightOut(listOfWeights);

    return n;
  }

First, the code determines the range of weights belonging to the winner neuron, from start to last. Then, in the for loop, each new weight is computed and assigned. Note the subtraction of the old weight from the input value (trainSetValue): the winner's weights are pulled a fraction of the way toward the input.
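In isolation, the update applied inside the loop is w<sub>new</sub> = w<sub>old</sub> + η(x − w<sub>old</sub>), where η is the learning rate. A standalone sketch using plain arrays (the class and method names are illustrative):

```java
public class UpdateSketch {
    // Kohonen single-neuron update: moves the winner's weights a
    // fraction (learningRate) of the way toward the input vector.
    public static double[] updateWinner(double[] weights, double[] input, double learningRate) {
        double[] updated = new double[weights.length];
        for (int j = 0; j < weights.length; j++) {
            // w_new = w_old + learningRate * (x - w_old)
            updated[j] = weights[j] + learningRate * (input[j] - weights[j]);
        }
        return updated;
    }
}
```

With a learning rate of 0.1, a zero weight vector presented with the input {1, -1} moves to {0.1, -0.1}; repeated presentations converge toward the input itself.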

Finally, let's check how these functions are used together in the train method. In order to save space, we will focus only on the epoch loop:

    for (int epoch = 0; epoch < n.getMaxEpochs(); epoch++) {

      for (int row_i = 0; row_i < rows; row_i++) {
        // Distance of every output neuron to the current training row
        listOfDistances = calcEuclideanDistance(n, trainData, row_i);

        // The winner is the neuron with the smallest distance
        int winnerNeuron = listOfDistances.indexOf(Collections.min(listOfDistances));

        // Update only the winner's weights
        n = fixWinnerWeights(n, winnerNeuron, row_i);
      }

    }

For every row in the training set, the Euclidean distances are calculated and, right after that, the winner neuron is determined. Then, the winner's weights are updated, and the learning process moves to the next row.
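The whole flow (distances, winner, update) can be reproduced as a standalone sketch that is independent of the NeuralNet class, using plain arrays and hypothetical hyperparameters:

```java
import java.util.Random;

public class KohonenSketch {
    // Minimal competitive-learning loop mirroring the epoch loop above:
    // for each row, find the nearest weight vector and pull it toward the row.
    public static double[][] train(double[][] data, int clusters,
                                   double learningRate, int maxEpochs, long seed) {
        Random rnd = new Random(seed);
        int inputs = data[0].length;
        double[][] weights = new double[clusters][inputs];
        // Small random initial weights
        for (double[] w : weights)
            for (int j = 0; j < inputs; j++) w[j] = rnd.nextDouble() - 0.5;

        for (int epoch = 0; epoch < maxEpochs; epoch++) {
            for (double[] row : data) {
                // Winner = cluster with the smallest squared Euclidean distance
                int winner = 0;
                double best = Double.MAX_VALUE;
                for (int c = 0; c < clusters; c++) {
                    double d = 0.0;
                    for (int j = 0; j < inputs; j++) {
                        double diff = row[j] - weights[c][j];
                        d += diff * diff;
                    }
                    if (d < best) { best = d; winner = c; }
                }
                // Update only the winner: w = w + learningRate * (x - w)
                for (int j = 0; j < inputs; j++)
                    weights[winner][j] += learningRate * (row[j] - weights[winner][j]);
            }
        }
        return weights;
    }
}
```

One caveat of this simplified algorithm, which the book's version shares: because only the single winner is updated per sample, a neuron that never wins is never moved (the classic "dead unit" issue of pure competitive learning).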

Kohonen implementation (clustering animals)

In this section, we will apply the Kohonen algorithm in practice. Imagine that we have some animals described by three characteristics: has pelage (Yes/No), is terrestrial (Yes/No), and has mammary glands (Yes/No). Our goal is to cluster the animals into two groups that are not known in advance. The following table summarizes this data:

| # | Animal  | Has pelage (Y = 1 / No = -1) | Is terrestrial (Y = 1 / No = -1) | Has mammary glands (Y = 1 / No = -1) |
|---|---------|------|------|------|
| 1 | Bat     |  1   | -1   |  1   |
| 2 | Shark   | -1   | -1   | -1   |
| 3 | Sea-cow | -1   | -1   |  1   |
| 4 | Spider  |  1   |  1   | -1   |
| 5 | Hippo   | -1   |  1   |  1   |
| 6 | Fly     |  1   | -1   | -1   |
| 7 | Viper   | -1   |  1   | -1   |
| 8 | Monkey  |  1   |  1   |  1   |

The following figure displays the architecture of the Kohonen neural net used for solving this problem:

[Figure: Architecture of the Kohonen neural network for clustering animals]

Next, let's analyze the test method called testKohonen(). It is as follows:

private void testKohonen() {
    NeuralNet testNet = new NeuralNet();

    // 2 inputs because of the "bias" (three input neurons in total)
    testNet = testNet.initNet(2, 0, 0, 2);

    NeuralNet trainedNet = new NeuralNet();

    // Training set: rows 1 to 6 of the animals table
    testNet.setTrainSet(new double[][] {
        {  1.0, -1.0,  1.0 }, { -1.0, -1.0, -1.0 }, { -1.0, -1.0,  1.0 },
        {  1.0,  1.0, -1.0 }, { -1.0,  1.0,  1.0 }, {  1.0, -1.0, -1.0 } });

    // Validation set: viper and monkey, respectively
    testNet.setValidationSet(new double[][] { { -1.0, 1.0, -1.0 }, { 1.0, 1.0, 1.0 } });

    testNet.setMaxEpochs(10);
    testNet.setLearningRate(0.1);
    testNet.setTrainType(TrainingTypesENUM.KOHONEN);

    trainedNet = testNet.trainNet(testNet);

    System.out.println();
    System.out.println("---------KOHONEN VALIDATION NET---------");

    testNet.netValidation(trainedNet);
}

The Kohonen test logic follows the same steps as those used in the previous implementations. First, an object of the NeuralNet class is created and used to initialize the net with three neurons in the input layer and two neurons in the output layer, which represent the number of clusters to find.

After that, the samples from rows 1 to 6 of the preceding table are used for training, and those from the last two rows are used for validating the neural net. It is important to ensure that the data used for validation is not the same as that used for training. To conclude, the method that trains the neural net is called.

When this test case finishes, it generates the validation results shown next.

[Figure: Kohonen validation results printed on the screen]

By analyzing the validation results, we find that the neural net is able to cluster two different kinds of animals:

  • Cluster 1: Mammal (monkey)
  • Cluster 2: Not mammal (viper)
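Under the hood, validating a sample amounts to assigning it to the cluster whose weight vector is nearest. A standalone sketch of this assignment, using hypothetical trained weights (the actual values depend on the training run; the class and method names are illustrative):

```java
public class ValidationSketch {
    // Assigns a sample to the cluster whose weight vector is closest
    // (smallest squared Euclidean distance).
    public static int cluster(double[] sample, double[][] clusterWeights) {
        int best = 0;
        double bestDistance = Double.MAX_VALUE;
        for (int c = 0; c < clusterWeights.length; c++) {
            double d = 0.0;
            for (int j = 0; j < sample.length; j++) {
                double diff = sample[j] - clusterWeights[c][j];
                d += diff * diff;
            }
            if (d < bestDistance) { bestDistance = d; best = c; }
        }
        return best;
    }
}
```

For example, if one cluster's weights ended up near {1, 1, 1} and the other's near {-1, -1, -1} (hypothetical values), the monkey row {1, 1, 1} would fall into the first cluster and the viper row {-1, 1, -1} into the second.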