What Does Artificial Neuron Mean?
An artificial neuron is a mathematical function inspired by the biological neurons in the human brain. In artificial intelligence (AI), particularly in the context of neural networks, neurons are the basic units of computation.
How Artificial Neurons Work
Every artificial neuron is composed of at least one input, a weight, a bias, and an activation function. Most neurons have more than one input.
Each input comes with its own weight, which is adjusted during training to reflect that input's importance. In addition to its weighted inputs, each neuron has a bias. The bias can be thought of as an extra input fixed at a constant value of 1, whose associated weight (the bias value itself) is learned during training like any other weight. The bias allows the neuron to produce outputs other than zero, even if all the inputs are zero.
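In symbols, using the same plain notation as the table further below, a neuron with n inputs computes:

output = f(w1*x1 + w2*x2 + ... + wn*xn + b)

where x1 through xn are the inputs, w1 through wn are their weights, b is the bias, and f is the activation function.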
Here is how an artificial neuron computes its output, step by step (a minimal code sketch follows the list):
- The neuron receives multiple input values.
- Each input is multiplied by its corresponding weight.
- All the weighted inputs are added up, and the bias is added to this sum.
- The result is passed through the activation function to produce the neuron's output. The activation function introduces non-linearity into the model, which is what lets networks of neurons learn complex, non-linear relationships rather than only straight-line fits.
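Here is a minimal Python sketch of those four steps for a single neuron with a sigmoid activation. The input values, weights, and bias below are made-up numbers for illustration only:

```python
import math

def sigmoid(x):
    # Sigmoid activation: squashes any real number into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def neuron_output(inputs, weights, bias):
    # Steps 1-2: multiply each input by its corresponding weight.
    # Step 3: sum the weighted inputs and add the bias.
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Step 4: pass the result through the activation function.
    return sigmoid(weighted_sum)

# Example with made-up values: three inputs, three weights, one bias.
print(neuron_output([0.5, -1.0, 2.0], [0.4, 0.3, 0.1], bias=0.2))
# weighted sum = 0.2 - 0.3 + 0.2 + 0.2 = 0.3; sigmoid(0.3) ≈ 0.574
```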
| Activation Function | Formula | Output Range | Pros |
|---|---|---|---|
| Sigmoid | f(x) = 1 / (1 + exp(-x)) | (0, 1) | Suitable for binary classification. |
| Tanh | f(x) = (2 / (1 + exp(-2x))) - 1 | (-1, 1) | Steeper gradients than sigmoid; zero-centered output. |
| ReLU | f(x) = max(0, x) | [0, +∞) | Computationally efficient; mitigates the vanishing-gradient problem. |
| Leaky ReLU | f(x) = max(ax, x) | (-∞, +∞) | Addresses "dying ReLU" with a small negative slope a. |
| PReLU | f(x) = max(ax, x) with learned a | (-∞, +∞) | Extends Leaky ReLU by making the slope a a learnable parameter. |
| ELU | f(x) = x if x > 0; a * (exp(x) - 1) if x <= 0 | (-a, +∞) | Smooth for negative inputs; addresses "dying ReLU." |
| Swish | f(x) = x * sigmoid(x) | (-∞, +∞) | Self-gated function that performs well in some deep networks. |
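These formulas map directly to code. Below is a minimal Python sketch of the functions from the table; the slope a = 0.01 for Leaky ReLU is a common default chosen here as an assumption, and PReLU is omitted because it only differs from Leaky ReLU in learning a during training:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):
    # Equivalent to math.tanh(x), written out as in the table.
    return (2.0 / (1.0 + math.exp(-2.0 * x))) - 1.0

def relu(x):
    return max(0.0, x)

def leaky_relu(x, a=0.01):
    # a = 0.01 is a common default slope (an assumption, not from the table).
    return max(a * x, x)

def elu(x, a=1.0):
    return x if x > 0 else a * (math.exp(x) - 1.0)

def swish(x):
    # Self-gated: the input is scaled by its own sigmoid.
    return x * sigmoid(x)

# Compare how each function treats a negative and a positive input.
for f in (sigmoid, tanh, relu, leaky_relu, elu, swish):
    print(f"{f.__name__}: f(-2) = {f(-2.0):.4f}, f(2) = {f(2.0):.4f}")
```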