Imagine making an enemy or game system emulate the way the brain works: that's the idea behind neural networks. Their building block is an artificial neuron called a Perceptron, and connecting several of them through their inputs and outputs is what makes a neural network.
In this recipe, we will learn how to build a neural system, starting from a single Perceptron all the way to joining several of them into a network.
We will need a data type for handling raw input; this is called InputPerceptron:

```csharp
public class InputPerceptron
{
    public float input;
    public float weight;
}
```
We will implement two big classes. The first is the implementation of the Perceptron data type, and the second is the data type handling the neural network:
Create the Perceptron class, derived from the InputPerceptron class that was previously defined:

```csharp
public class Perceptron : InputPerceptron
{
    public InputPerceptron[] inputList;
    public delegate float Threshold(float x);
    public Threshold threshold;
    public float state;
    public float error;
}
```
Implement the constructor for setting up the list of inputs:

```csharp
public Perceptron(int inputSize)
{
    inputList = new InputPerceptron[inputSize];
}
```
Define the function for processing the inputs:

```csharp
public void FeedForward()
{
    float sum = 0f;
    foreach (InputPerceptron i in inputList)
    {
        sum += i.input * i.weight;
    }
    state = threshold(sum);
}
```
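The Threshold delegate is left open here. A minimal sketch of one possible choice, the sigmoid function, which is an assumption on our part (though the state * (1f - state) error terms used later during back propagation match its derivative):

```csharp
using UnityEngine;

public static class ThresholdFunctions
{
    // Sigmoid squashing function; maps any input into the (0, 1) range.
    public static float Sigmoid(float x)
    {
        return 1f / (1f + Mathf.Exp(-x));
    }
}
```

It can then be assigned to a neuron with `perceptron.threshold = ThresholdFunctions.Sigmoid;`.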
Implement the function for adjusting the weights:

```csharp
public void AdjustWeights(float currentError)
{
    int i;
    for (i = 0; i < inputList.Length; i++)
    {
        // delta rule: scale the error by the connection's input and the
        // neuron's state, and accumulate it into the weight
        float deltaWeight = currentError * inputList[i].input * state;
        inputList[i].weight += deltaWeight;
    }
    error = currentError;
}
```
We also need a function for retrieving the weight of a connection coming from another Perceptron; it will be used during back propagation:

```csharp
public float GetIncomingWeight()
{
    foreach (InputPerceptron i in inputList)
    {
        if (i.GetType() == typeof(Perceptron))
            return i.weight;
    }
    return 0f;
}
```
Create the class for handling a set of Perceptron instances as a network:

```csharp
using UnityEngine;
using System.Collections;

public class MLPNetwork : MonoBehaviour
{
    public Perceptron[] inputPer;
    public Perceptron[] hiddenPer;
    public Perceptron[] outputPer;
}
```
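The recipe does not show how the layers get wired together. A minimal sketch under two assumptions of ours: the layers are connected one-to-one (this data model stores the weight on the input object itself, so sharing one upstream Perceptron among several inputList arrays would also share its weight), and the helper name WireOneToOne is hypothetical, assumed to live inside MLPNetwork:

```csharp
// Hypothetical setup helper: each hidden Perceptron reads one input
// Perceptron, and each output Perceptron reads one hidden Perceptron.
// Since Perceptron derives from InputPerceptron, upstream neurons can be
// stored directly in a downstream inputList; copying their state into
// their input field before feeding forward is left out of the recipe.
void WireOneToOne()
{
    for (int i = 0; i < hiddenPer.Length; i++)
    {
        hiddenPer[i].inputList = new InputPerceptron[] { inputPer[i] };
        hiddenPer[i].inputList[0].weight = Random.Range(-0.5f, 0.5f);
    }
    for (int i = 0; i < outputPer.Length; i++)
    {
        outputPer[i].inputList = new InputPerceptron[] { hiddenPer[i] };
        outputPer[i].inputList[0].weight = Random.Range(-0.5f, 0.5f);
    }
}
```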
Implement the function for transmitting the inputs from one layer to the next:

```csharp
public void GenerateOutput(Perceptron[] inputs)
{
    int i;
    for (i = 0; i < inputs.Length; i++)
        inputPer[i].state = inputs[i].input;
    for (i = 0; i < hiddenPer.Length; i++)
        hiddenPer[i].FeedForward();
    for (i = 0; i < outputPer.Length; i++)
        outputPer[i].FeedForward();
}
```
Define the function for back propagation, which adjusts the weights layer by layer; its body is implemented in the next steps:

```csharp
public void BackProp(Perceptron[] outputs)
{
    // next steps
}
```
First, compute the error for the output layer by comparing each neuron's state against the expected output:

```csharp
int i;
for (i = 0; i < outputPer.Length; i++)
{
    Perceptron p = outputPer[i];
    float state = p.state;
    // error term: threshold derivative times the output difference
    float error = state * (1f - state);
    error *= outputs[i].state - state;
    p.AdjustWeights(error);
}
```
Then, compute the error for the remaining Perceptron layers, except the input layer:

```csharp
for (i = 0; i < hiddenPer.Length; i++)
{
    Perceptron p = hiddenPer[i];
    float state = p.state;
    float sum = 0f;
    int j;
    for (j = 0; j < outputPer.Length; j++)
    {
        // weigh each downstream error by its incoming connection
        float incomingW = outputPer[j].GetIncomingWeight();
        sum += incomingW * outputPer[j].error;
    }
    float error = state * (1f - state) * sum;
    p.AdjustWeights(error);
}
```
Finally, implement the function for learning, which joins the two processes:

```csharp
public void Learn(Perceptron[] inputs, Perceptron[] outputs)
{
    GenerateOutput(inputs);
    BackProp(outputs);
}
```
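As a usage sketch (hypothetical names; it assumes the network is already wired and that trainingInputs and expectedOutputs are Perceptron arrays holding the training values):

```csharp
// Hypothetical training loop: call Learn repeatedly so that each pass
// feeds the inputs forward and back-propagates the resulting error.
MLPNetwork mlp = GetComponent<MLPNetwork>();
for (int epoch = 0; epoch < 1000; epoch++)
{
    mlp.Learn(trainingInputs, expectedOutputs);
}
```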
We implemented two types of Perceptron in order to distinguish the neurons that handle external input from the ones connected internally to each other; that's why the Perceptron class derives from InputPerceptron. The FeedForward function takes the inputs and propagates them along the network. Finally, the back-propagation function is responsible for adjusting the weights, and this weight adjustment is what emulates learning.