Neural networks – the foundation

The inspiration for neural networks, or multilayer perceptrons, is the human brain and nervous system. At the heart of our nervous system is the neuron, pictured in the following diagram beside its computer analog, the perceptron:

Example of human neuron beside a perceptron

The neurons in our brain collect input, do something with it, and then spit out a response, much like their computer analog, the perceptron. A perceptron takes a set of inputs, sums them all up, and passes them through an activation function. That activation function determines whether to send output, and at what level to send it when activated. Let's take a closer look at the perceptron, as follows:

Perceptron

On the left-hand side of the preceding diagram, you can see the set of inputs getting pushed in, plus a constant bias. We will get more into the bias later. The inputs are then multiplied by a set of individual weights and passed through an activation function. In Python code, this is as simple as the example in Chapter_1_1.py:


inputs = [1,2]
weights = [1,1,1]

def perceptron_predict(inputs, weights):
    # weights[0] is the bias; weights[1:] pair up with the inputs
    activation = weights[0]
    for i in range(len(inputs)):
        activation += weights[i+1] * inputs[i]
    return 1.0 if activation >= 0.0 else 0.0

print(perceptron_predict(inputs,weights))

Note how the weights list has one more element than the inputs list; the extra element accounts for the bias (weights[0]). Other than that, you can see that we simply loop through the inputs, multiplying each by its designated weight and adding the result to the bias. The final activation is then compared to 0.0: if it is greater than or equal to 0.0, we output 1.0; otherwise, we output 0.0. In this very simple example, comparing the value to 0 is essentially a simple step function. We will spend some time later revisiting various activation functions; consider this simple model an essential part of carrying out those functions.
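As a quick preview of those later activation functions, here is a minimal sketch, our own illustration rather than part of the chapter's sample code, that factors the activation out of perceptron_predict so alternatives such as a sigmoid can be swapped in. The step and sigmoid function names here are assumptions for demonstration:

import math

def step(activation):
    # Step function: fire fully (1.0) or not at all (0.0)
    return 1.0 if activation >= 0.0 else 0.0

def sigmoid(activation):
    # Smooth alternative: outputs a level between 0 and 1
    return 1.0 / (1.0 + math.exp(-activation))

def perceptron_predict(inputs, weights, activation_fn=step):
    # weights[0] is the bias; weights[1:] pair up with the inputs
    activation = weights[0]
    for i in range(len(inputs)):
        activation += weights[i+1] * inputs[i]
    return activation_fn(activation)

print(perceptron_predict([1,2], [1,1,1]))           # 1.0
print(perceptron_predict([1,2], [1,1,1], sigmoid))  # ~0.982

Notice that only the final line of perceptron_predict changes; the weighted sum stays the same regardless of which activation function we choose.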

What is the output from the preceding block of sample code? See whether you can figure it out, or take the less challenging route and copy and paste it into your favorite Python editor and run it. The code will run as is and requires no special libraries.
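If you want to check your answer: with weights of [1,1,1] and inputs of [1,2], the activation works out to 1 + (1 * 1) + (1 * 2) = 4, which is greater than or equal to 0.0, so the code prints 1.0.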

In the previous code example, we are looking at a single point of input data, [1,2], which is hardly useful when it comes to DL. DL models typically require hundreds, thousands, or even millions of data points or sets of input data to train and learn effectively. Fortunately, for a single perceptron, a training set of just 10 points is enough to demonstrate the idea.

Let's expand on the preceding example and run a training set of 10 points through the perceptron_predict function by opening up your preferred Python editor and following these steps:

We will use Visual Studio Code for most of the major coding sections later in this book. By all means, use your preferred editor, but if you are relatively new to Python, give the code a try. Visual Studio Code is available for Windows, macOS, and Linux.
  1. Enter the following block of code in your preferred Python editor or open Chapter_1_2.py from the downloaded source code:
train = [[1,2],[2,3],[1,1],[2,2],[3,3],[4,2],[2,5],[5,5],[4,1],[4,4]]
weights = [1,1,1]

def perceptron_predict(inputs, weights):
    # weights[0] is the bias; weights[1:] pair up with the inputs
    activation = weights[0]
    for i in range(len(inputs)):
        activation += weights[i+1] * inputs[i]
    return 1.0 if activation >= 0.0 else 0.0

for inputs in train:
    print(perceptron_predict(inputs,weights))
  2. This code just extends the earlier example we looked at. In this case, we are testing multiple points of data defined in the train list. Then we iterate through each item in the list and print out the predicted value.
  3. Run the code and observe the output. If you are unsure of how to run Python code, be sure to learn how to do so before going any further.

You should see an output of ten repeating 1.0s, which essentially means the perceptron classifies every input the same way. This is not very useful. The reason for this is that we have not trained or adjusted the input weights to match a known output. What we need to do is train the weights to recognize the data, and we will look at how to do that properly in the next section; a rough preview follows below.
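As that preview, here is a minimal sketch of the classic perceptron learning rule. Note that this is our own illustration, not the chapter's sample code: the labeled training data, the learning rate, and the epoch count are all assumptions made for demonstration.

# Minimal sketch of the classic perceptron learning rule.
# The labeled data, learning rate, and epoch count below are
# illustrative assumptions, not the chapter's sample code.
train = [[1,2,0],[2,3,0],[1,1,0],[4,1,1],[5,5,1],[4,4,1]]  # [x1, x2, label]
weights = [1.0, 1.0, 1.0]
learning_rate = 0.1

def perceptron_predict(inputs, weights):
    activation = weights[0]
    for i in range(len(inputs)):
        activation += weights[i+1] * inputs[i]
    return 1.0 if activation >= 0.0 else 0.0

for epoch in range(10):
    for row in train:
        inputs, label = row[:-1], row[-1]
        error = label - perceptron_predict(inputs, weights)
        # Nudge the bias and each weight in the direction that reduces the error
        weights[0] += learning_rate * error
        for i in range(len(inputs)):
            weights[i+1] += learning_rate * error * inputs[i]

# After training, the perceptron should separate this small, linearly
# separable set: each row prints alongside its predicted label
for row in train:
    print(row, perceptron_predict(row[:-1], weights))

The key idea is the update rule: whenever the prediction is wrong, each weight is nudged proportionally to its input and the size of the error, which is exactly the adjustment our untrained example above was missing.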
