A perceptron, also referred to as a neuron, takes in several binary inputs and produces a single binary output.

A perceptron has a weight associated with each input. These weights are real numbers that express the importance of the respective input to the output. The binary output of the perceptron is then determined by whether the weighted sum of all inputs is greater than some threshold value: 1 if it is, 0 otherwise.
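The threshold rule above can be sketched in a few lines of Python (the function name and example weights are just illustrative):

```python
def perceptron(inputs, weights, threshold):
    # Weighted sum of the binary inputs; output 1 iff it exceeds the threshold.
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > threshold else 0

# With weights (0.6, 0.3, 0.2) and threshold 0.5, only combinations of
# inputs whose weights sum past 0.5 fire the perceptron.
print(perceptron([1, 0, 1], [0.6, 0.3, 0.2], 0.5))  # 0.6 + 0.2 > 0.5 -> 1
print(perceptron([0, 1, 0], [0.6, 0.3, 0.2], 0.5))  # 0.3 <= 0.5     -> 0
```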

A perceptron is essentially a mathematical model that makes decisions by weighing evidence. We can replace the weighted sum Σⱼ wⱼxⱼ with the dot product w·x, and we can replace the threshold with a bias b ≡ −threshold, moved to the other side of the inequality, so the perceptron outputs 1 when w·x + b > 0 and 0 otherwise. The bias measures how easy it is to get the perceptron to output a 1: a larger bias makes an output of 1 easier, while a smaller or negative bias makes it harder.
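In the bias form, a perceptron can implement familiar logic gates. A standard example: weights of −2 on each input and a bias of 3 give a NAND gate. A minimal sketch:

```python
def perceptron(inputs, weights, bias):
    # Output 1 iff w·x + b > 0, where the bias b equals minus the threshold.
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > 0 else 0

# Weights (-2, -2) with bias 3 implement NAND: the output is 0 only
# when both inputs are 1 (since -2 - 2 + 3 = -1 <= 0).
for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "->", perceptron([x1, x2], [-2, -2], 3))
```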

We can develop learning algorithms that automatically tune the weights and biases of a network of perceptrons, without direct intervention from a programmer. This lets us use perceptrons in a way that is fundamentally different from conventional logic gates, whose behavior is fixed in advance.
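One classic way to do this tuning for a single perceptron is the perceptron learning rule: nudge the weights and bias in proportion to the output error on each training example. A minimal sketch, here learning the AND function (the function names and the learning rate are illustrative choices):

```python
def predict(x, weights, bias):
    # Standard perceptron output: 1 iff w·x + b > 0.
    total = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if total > 0 else 0

def train(samples, lr=1.0, epochs=20):
    # Perceptron learning rule: for each example, move the weights
    # and bias by lr * (target - output) in the direction of the input.
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            error = target - predict(x, weights, bias)
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Learn AND from its truth table.
and_table = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train(and_table)
for x, target in and_table:
    print(x, "->", predict(x, weights, bias))
```

Because AND is linearly separable, the perceptron convergence theorem guarantees this loop finds correct weights in a finite number of updates.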
