2.4 Geometric interpretation
The perceptron can be visualized geometrically: in classification, the model tries to separate classes with a linear decision boundary. Geometrically:
- In 2D → the decision boundary is a line
- In 3D → the decision boundary is a plane
- In nD → the decision boundary is a hyperplane
Example: Perceptron
Equation:

w0 + w1·x1 + w2·x2 = 0

This represents a line in 2D.
In this case, the vector x is an input vector of dimensionality n, x ∈ Rⁿ, whose elements are the values x1, x2, ..., xn. Note that a '1' is concatenated to the front of any input vector. These values are all multiplied by a corresponding weight, w0 to wn: w1 to wn are called the correlating weights, and w0, which is multiplied by the '1', is called the bias.
These products are all summed, generating a weighted sum of the inputs.
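The weighted sum described above can be sketched in a few lines of NumPy; the weight values here are hypothetical, chosen only for illustration:

```python
import numpy as np

# Hypothetical example weights: w0 is the bias, w1 and w2 the correlating weights.
w = np.array([-1.0, 2.0, 0.5])      # [w0, w1, w2]

x = np.array([3.0, -2.0])           # raw input (x1, x2)
x_aug = np.concatenate(([1.0], x))  # concatenate the '1' to the front

weighted_sum = np.dot(w, x_aug)     # w0*1 + w1*x1 + w2*x2
print(weighted_sum)                 # -1.0 + 6.0 - 1.0 = 4.0
```

Prepending the constant 1 lets the bias w0 be handled by the same dot product as the other weights.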
The weighted sum is then passed through an activation function s. Put simply, this function just checks the sign of its input, returning 0 if the input is negative and 1 otherwise:

s(z) = 0 if z < 0, 1 if z ≥ 0
Taken all together, the perceptron classification model is simply

ŷ = s(w⋅x),

where the left side of the equation is the predicted label for a point, s is the activation function described above, and the dot product w⋅x is equal to the weighted sum of the elements of that point.
If the weights are used to define some decision boundary, then the above classification function tells us whether a data point x lies above or below the boundary. Mathematically, this is done by checking whether w⋅x is greater than or less than zero.
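The full classifier, the step activation applied to the weighted sum, can be sketched as follows (the weights are again hypothetical):

```python
import numpy as np

def s(z):
    """Step activation: returns 0 if z is negative, 1 otherwise."""
    return 0 if z < 0 else 1

def predict(w, x):
    """Perceptron prediction s(w.x), with a '1' prepended to x for the bias."""
    x_aug = np.concatenate(([1.0], x))
    return s(np.dot(w, x_aug))

# Hypothetical weights [w0, w1, w2]; the boundary is w0 + w1*x1 + w2*x2 = 0.
w = np.array([-1.0, 2.0, 0.5])
print(predict(w, np.array([3.0, -2.0])))  # w.x = 4.0 >= 0  -> 1
print(predict(w, np.array([0.0, 0.0])))   # w.x = -1.0 < 0  -> 0
```

The two sample points land on opposite sides of the boundary, so they receive opposite labels.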
Let’s consider the decision boundary of the perceptron algorithm, which is given by the equation

w⋅x = 0.

This is where the model’s output changes: it is exactly the set of points at which the linear combination w⋅x changes sign.
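For a 2D input, this boundary can be traced explicitly by solving w0 + w1·x1 + w2·x2 = 0 for x2; every point on the resulting line satisfies w⋅x = 0. The weights below are hypothetical:

```python
import numpy as np

w = np.array([-1.0, 2.0, 0.5])  # hypothetical weights [w0, w1, w2]

# Trace the 2D boundary by solving w0 + w1*x1 + w2*x2 = 0 for x2.
for x1 in (-1.0, 0.0, 1.0):
    x2 = -(w[0] + w[1] * x1) / w[2]
    x_aug = np.array([1.0, x1, x2])
    print(np.dot(w, x_aug))  # points on the boundary give 0 (up to float error)
```

Points with w⋅x > 0 lie on one side of this line and are labeled 1; points with w⋅x < 0 lie on the other side and are labeled 0.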