Interactive
Neural Network Lab
Interactive visual neural network learning — no black boxes.
Live Network
Gradient Descent
The optimizer moves the model's parameters across the loss landscape, stepping in the direction that reduces error fastest.
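A minimal sketch of that idea, assuming a toy one-parameter loss L(w) = (w - 3)² (the loss function and learning rate here are illustrative, not part of the lab):

```python
# Gradient descent on a 1-D loss L(w) = (w - 3)**2.
# The gradient dL/dw = 2*(w - 3) points uphill; each step moves the other way.
def gradient_descent(w, lr=0.1, steps=50):
    for _ in range(steps):
        grad = 2 * (w - 3)   # derivative of the loss at the current w
        w -= lr * grad       # step against the gradient
    return w

w_final = gradient_descent(w=0.0)
print(round(w_final, 4))  # converges toward the minimum at w = 3
```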
Activation Function
ReLU
max(0, x) — outputs zero for negative inputs, linear for positive. Prevents gradient vanishing for positive activations.
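ReLU and its gradient can be written in a few lines (a sketch, not the lab's internal implementation):

```python
def relu(x):
    # max(0, x): zero for negative inputs, identity for positive ones
    return max(0.0, x)

def relu_grad(x):
    # gradient is 1 for positive inputs and 0 otherwise, so positive
    # activations pass gradients through unchanged during backprop
    return 1.0 if x > 0 else 0.0
```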
Backpropagation Trace
Forward Pass
Inputs travel left to right through the network, each layer computing a weighted sum and activation.
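The forward pass above can be sketched as plain Python, assuming a tiny 2-2-1 network with hand-picked illustrative weights:

```python
def dense(x, W, b):
    # one layer: weighted sum W @ x + b for each output neuron
    return [sum(w_i * x_i for w_i, x_i in zip(row, x)) + b_j
            for row, b_j in zip(W, b)]

def relu(v):
    return [max(0.0, z) for z in v]

# inputs flow left to right: input -> hidden (with activation) -> output
x = [1.0, 2.0]
h = relu(dense(x, W=[[0.5, -1.0], [1.0, 1.0]], b=[0.0, 0.5]))
y = dense(h, W=[[1.0, -0.5]], b=[0.1])
print(y)  # [-1.65]
```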
What is a neuron?
A neuron is a computational unit that takes weighted inputs, adds a bias, and passes the sum through an activation function. It is the fundamental building block of every neural network.
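That definition maps directly to code — a sketch of one neuron, using a sigmoid as the example activation:

```python
import math

def neuron(inputs, weights, bias):
    # weighted sum of inputs, plus bias, through a sigmoid activation
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))
```

With zero weights and zero bias the weighted sum is 0, so the sigmoid outputs exactly 0.5.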
What are weights?
Weights are learned parameters that control the strength of connections between neurons. During training, the optimizer adjusts weights to reduce prediction error.
What is bias?
Bias is an extra learnable parameter added before the activation function. It allows the neuron to shift its activation threshold independently of input values.
What is activation?
An activation function introduces non-linearity, allowing networks to learn complex patterns. Without activation functions, a stack of layers would collapse to a single linear transformation.
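The collapse claim is easy to verify numerically: two stacked linear layers give the same output as the single layer formed by multiplying their weight matrices (matrices and input here are arbitrary illustrative values):

```python
def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def matmul(B, A):
    return [[sum(B[i][k] * A[k][j] for k in range(len(A)))
             for j in range(len(A[0]))] for i in range(len(B))]

A = [[1.0, 2.0], [0.0, 1.0]]
B = [[0.5, -1.0], [2.0, 0.0]]
x = [3.0, 4.0]

two_layers = matvec(B, matvec(A, x))   # layer A, then layer B
one_layer = matvec(matmul(B, A), x)    # the single equivalent layer B @ A
# identical results: without activations, depth adds no expressive power
```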
What is loss?
Loss measures how wrong the network's predictions are. Common loss functions: Mean Squared Error for regression, Cross-Entropy for classification. Lower loss = better predictions.
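Both loss functions named above fit in a few lines (a sketch; cross-entropy here takes predicted class probabilities and the index of the true class):

```python
import math

def mse(preds, targets):
    # Mean Squared Error: average squared difference (regression)
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(preds)

def cross_entropy(probs, label):
    # negative log-probability assigned to the true class (classification)
    return -math.log(probs[label])
```

Perfect predictions give zero MSE; a confident correct classification drives cross-entropy toward zero, while a confident wrong one blows it up.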
How does learning update weights?
The optimizer computes gradients of the loss w.r.t. each weight, then nudges weights in the direction that reduces loss. Learning rate controls step size. This repeats across all training examples.
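The whole loop — gradient, step, repeat over training examples — fits in a short sketch, assuming a toy one-weight model y = w·x trained on made-up data with squared error:

```python
# Toy data following y = 2x; the model must learn w ≈ 2.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w, lr = 0.0, 0.05   # lr (learning rate) controls the step size

for epoch in range(100):
    for x, y in data:                # repeat across all training examples
        pred = w * x
        grad = 2 * (pred - y) * x    # gradient of squared error w.r.t. w
        w -= lr * grad               # nudge w in the loss-reducing direction

print(round(w, 3))  # approaches 2.0
```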