Activation functions are components of artificial neural networks that compute a node's output from its weighted inputs. Essentially, they decide whether a neuron should be activated or not. Typically, all hidden layers share the same activation function, while the output layer may use a different one so that the network produces outputs in a sensible range (for example, probabilities). The purpose of activation functions is to introduce non-linearity into the network; without them, a stack of layers would collapse into a single linear transformation. There are two main types of activation functions:
- Linear Activation Function
- Non-Linear Activation Function
Some of the most widely used activation functions are:
- Rectified Linear Unit Activation Function (ReLU)
- Sigmoid Activation Function
- Softmax Activation Function
- Hyperbolic Tangent Activation Function (tanh)
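The four functions listed above can be sketched in a few lines of NumPy. This is a minimal illustration, not a production implementation; the function names are chosen here for clarity, and the max-subtraction in `softmax` is a standard trick to avoid overflow in the exponentials.

```python
import numpy as np

def relu(x):
    # ReLU: passes positive values through, zeroes out negatives
    return np.maximum(0.0, x)

def sigmoid(x):
    # Sigmoid: squashes any real number into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    # Hyperbolic tangent: squashes any real number into (-1, 1)
    return np.tanh(x)

def softmax(x):
    # Softmax: converts a vector of scores into a probability
    # distribution; subtracting the max improves numerical stability
    shifted = x - np.max(x)
    exps = np.exp(shifted)
    return exps / exps.sum()
```

For example, `relu(np.array([-2.0, 3.0]))` yields `[0.0, 3.0]`, and the outputs of `softmax` always sum to 1, which is why it is the usual choice for the output layer of a multi-class classifier.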