|           | Input 1 | Input 2 | Input 3 | Output |
|-----------|---------|---------|---------|--------|
| Example 1 | 0       | 0       | 1       | 0      |
| Example 2 | 1       | 1       | 1       | 1      |
| Example 3 | 1       | 0       | 1       | 1      |
| Example 4 | 0       | 1       | 1      | 0      |
| New input | 1       | 0       | 0       | ?      |
This network won't have any hidden layers and will look like this:
In our case, the input values will be either 0 or 1. Each synapse is given a random weight. The neuron computes a weighted sum of the inputs, and we then pass that sum through a normalizing function to squash the output towards either 0 or 1. For this, we'll use the Sigmoid function.
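As a minimal sketch of that normalization step, here is the Sigmoid function applied to a weighted sum (the example inputs and weights below are made up for illustration):

```python
import math

def sigmoid(x):
    # Squashes any real-valued weighted sum into the range (0, 1)
    return 1 / (1 + math.exp(-x))

# Hypothetical neuron: inputs of 0 or 1, one random-looking weight per synapse
inputs = [1, 0, 1]
weights = [0.5, -0.4, 0.2]

weighted_sum = sum(i * w for i, w in zip(inputs, weights))
output = sigmoid(weighted_sum)  # somewhere between 0 and 1
```

The further the weighted sum is from zero, the closer the Sigmoid pushes the output to 0 or 1, which is exactly the behaviour we want for a binary output.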
- Take the inputs from a training example and put them through the formula to get the neuron's output
- Calculate the error, which is the difference between the expected output and the output we got
- Adjust the weights in proportion to the severity of the error
- Repeat the process 100,000 times
We'll multiply the error, which is the difference between the expected output and the actual output, by the input, which is either 0 or 1. Then we multiply by the gradient of the Sigmoid curve evaluated at the neuron's output.
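A convenient property of the Sigmoid is that its gradient can be written in terms of its own output, so the adjustment rule needs nothing beyond values we already have. A small sketch (the function names here are ours, not from any library):

```python
def sigmoid_derivative(output):
    # Gradient of the Sigmoid curve, expressed via the neuron's own output
    return output * (1 - output)

def weight_adjustment(error, input_value, output):
    # error * input * gradient: the adjustment rule described above.
    # A zero input contributes no adjustment to its synapse's weight.
    return error * input_value * sigmoid_derivative(output)
```

Note that the gradient is largest when the output is near 0.5 (the neuron is uncertain) and shrinks towards zero as the output approaches 0 or 1, so confident neurons receive only small corrections.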