3.6 The Back-propagation Algorithm
- The Backpropagation Algorithm is a supervised learning algorithm used to train artificial neural networks, especially Multi-Layer Perceptrons (MLPs).
- Its main purpose is to improve the accuracy of a neural network by reducing errors.
- Backpropagation helps the neural network learn from its mistakes and adjust itself to make better predictions.
- It works by calculating the error between predicted and actual outputs and adjusting the weights and biases of the network to reduce this error.
Structure of a Neural Network
A neural network contains three main layers:
1. Input Layer
- The input layer receives the data.
- Each neuron represents one feature of the input.
Example inputs:
- Study hours
- Attendance
- Previous marks
2. Hidden Layer
- The hidden layer processes the information.
Here the data is:
- multiplied by weights
- adjusted using bias
- passed through activation functions such as ReLU or Sigmoid
A neural network may have one or more hidden layers.
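The hidden-layer computation described above can be sketched for a single neuron: a weighted sum of the inputs, plus a bias, passed through an activation function. This is a minimal illustration; the function name, weights, and input values are made up, and Sigmoid is used as the activation.

```python
import math

def neuron_output(inputs, weights, bias):
    # Weighted sum of inputs plus bias, then Sigmoid activation
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))

# Hypothetical values: two input features, made-up weights and bias
print(neuron_output([5.0, 0.8], [0.4, 0.6], 0.1))
```

Swapping the last line of `neuron_output` for `max(0.0, z)` would give the ReLU activation instead.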
3. Output Layer
- The output layer produces the final prediction.
Example outputs:
- Pass / Fail
- Spam / Not Spam
- Cat / Dog
Objective of Backpropagation
- The main goal of backpropagation is to reduce the difference between predicted output and actual output. This difference is called error or loss.
- The algorithm adjusts the weights and biases of the network so that the error becomes smaller and smaller over time.
- A loss function (such as Mean Squared Error) is used to measure how large the error is.
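As a concrete illustration, Mean Squared Error can be computed directly: it is the average of the squared differences between actual and predicted values. A minimal sketch (the function name and sample values are illustrative):

```python
def mse(y_true, y_pred):
    # Average of squared differences between actual and predicted values
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

# Illustrative values: three predictions compared against actual outputs
print(mse([1.0, 0.0, 1.0], [0.9, 0.2, 0.6]))  # ≈ 0.07
```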
Goals of the Backpropagation Algorithm
1. Minimize the Error
Backpropagation reduces the difference between predicted results and actual results.
2. Optimize Model Parameters
The algorithm updates weights and biases so the model becomes more accurate.
3. Improve Network Performance
By repeatedly adjusting the model, it learns patterns better and produces more accurate predictions.
4. Enable Multi-Layer Learning
Backpropagation allows neural networks to learn through multiple hidden layers, which helps solve complex problems.
Two Main Stages of Backpropagation
Backpropagation works in two stages:
- Forward Pass
- Backward Pass
1. Forward Pass
In the forward pass, data moves from the input layer to the output layer.
Steps:
- Input data enters the network.
- Data passes through hidden layers.
- Each neuron applies weights, bias, and activation function.
- Finally, the network produces an output.
Example:
Input:
Study hours = 5
Attendance = 80%
The network predicts:
Prediction = Pass
But suppose the actual result is Fail.
So there is an error.
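The forward pass above can be sketched as a tiny network with one hidden layer of two neurons and a single output neuron. All weights, biases, and the input scaling here are made up purely for illustration:

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def forward(x, w_hidden, b_hidden, w_out, b_out):
    # Hidden layer: each neuron applies its weights, bias, and activation
    hidden = [sigmoid(sum(xi * wi for xi, wi in zip(x, w_row)) + b)
              for w_row, b in zip(w_hidden, b_hidden)]
    # Output layer: combines the hidden activations into one prediction
    return sigmoid(sum(h * w for h, w in zip(hidden, w_out)) + b_out)

# Hypothetical input: study hours = 5 (scaled to 0.5), attendance = 0.8
x = [0.5, 0.8]
w_hidden = [[0.2, -0.1], [0.4, 0.3]]   # made-up hidden-layer weights
b_hidden = [0.0, 0.1]                  # made-up hidden-layer biases
w_out = [0.5, -0.6]                    # made-up output weights
b_out = 0.05
print(forward(x, w_hidden, b_hidden, w_out, b_out))
```

A prediction above 0.5 could be read as "Pass" and one below 0.5 as "Fail".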
2. Backward Pass (Error Propagation)
After calculating the error, the backward pass starts.
The goal is to reduce the error by adjusting weights and biases.
Steps:
- The error from the output layer is sent backward through the network.
- The algorithm calculates how much each weight contributed to the error.
- Using this information, the weights are updated to reduce the error.
This calculation uses gradients (derivatives).
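A minimal sketch of this gradient calculation for a single sigmoid neuron with squared-error loss, using the chain rule. The function name, learning rate, and input values are illustrative:

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def update_weight(x, w, b, y_true, lr=0.1):
    # Chain rule: dL/dw = dL/dy_pred * dy_pred/dz * dz/dw
    z = x * w + b
    y_pred = sigmoid(z)
    dL_dy = 2 * (y_pred - y_true)    # derivative of (y_pred - y_true)^2
    dy_dz = y_pred * (1 - y_pred)    # derivative of the sigmoid
    dz_dw = x                        # derivative of (x*w + b) w.r.t. w
    grad = dL_dy * dy_dz * dz_dw
    return w - lr * grad             # gradient-descent update

# Illustrative values: prediction is too low, so the weight moves up
print(update_weight(x=1.0, w=0.5, b=0.0, y_true=1.0))
```

After the update, the prediction for the same input is closer to the target, which is exactly the "reduce the error" step described above.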
The process repeats many times during training.
Each time:
- Prediction is made
- Error is calculated
- Weights are adjusted
Gradually the error becomes smaller, and the model becomes more accurate.
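Putting the two stages together, the repeated predict-measure-adjust cycle can be sketched as a training loop for a single sigmoid neuron. The data, learning rate, and epoch count are all made up for illustration:

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def train(samples, epochs=1000, lr=0.5):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y_true in samples:
            y_pred = sigmoid(x * w + b)                           # forward pass
            grad = 2 * (y_pred - y_true) * y_pred * (1 - y_pred)  # error gradient
            w -= lr * grad * x                                    # adjust weight
            b -= lr * grad                                        # adjust bias
    return w, b

# Toy data: higher study hours (scaled 0-1) -> pass (1), lower -> fail (0)
samples = [(0.2, 0), (0.4, 0), (0.6, 1), (0.9, 1)]
w, b = train(samples)
print(sigmoid(0.9 * w + b))  # prediction for a high-study-hours student
print(sigmoid(0.2 * w + b))  # prediction for a low-study-hours student
```

After training, predictions for the two groups move toward 1 and 0 respectively, which is the "error becomes smaller" behaviour described above.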