A visual journey through the building blocks of artificial intelligence. Interact with the algorithms that power modern AI — from the first artificial neuron to evolving neural networks.
Frank Rosenblatt's Perceptron was among the first algorithms that could learn from data. Inspired by biological neurons, it takes inputs, multiplies each by a learnable weight, sums them, and fires if the result exceeds a threshold. The New York Times wrote it would one day "walk, talk, see, write, and be conscious of its existence."
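The weigh-sum-fire loop, plus the classic error-driven update rule, fits in a few lines. This is a minimal sketch: the learning rate, epoch count, and the AND dataset are illustrative choices, not the demo's actual parameters.

```python
# A minimal perceptron sketch: weighted sum, threshold, error-driven update.

def predict(weights, bias, inputs):
    # Fire (output 1) if the weighted sum exceeds the threshold (here, 0).
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > 0 else 0

def train(samples, epochs=20, lr=0.1):
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            error = target - predict(weights, bias, inputs)
            # Nudge each weight in the direction that reduces the error.
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# AND is linearly separable, so the perceptron converges:
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(AND)
print([predict(w, b, x) for x, _ in AND])  # [0, 0, 0, 1]
```

The perceptron convergence theorem guarantees this loop terminates on any linearly separable dataset, which is exactly the guarantee XOR breaks.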
The green line is the decision boundary — it divides the input space into two regions. Points above the line are classified as 1, below as 0. Watch it shift as the perceptron adjusts its weights each epoch. Green dots are correctly classified; red means the perceptron got it wrong.
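For a two-input perceptron, the boundary line can be read straight off the weights: it is the set of points where the weighted sum equals the threshold. The weight values below are illustrative, not taken from the demo.

```python
# The decision boundary is the line where the weighted sum equals zero:
#   w1*x + w2*y + b = 0   =>   y = -(w1*x + b) / w2
# Illustrative weights; each weight update tilts or shifts this line.
w1, w2, b = 0.8, -1.0, 0.3

def boundary_y(x):
    return -(w1 * x + b) / w2

print(boundary_y(0.0))  # y-intercept of the boundary: 0.3
```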
In 1969, Minsky and Papert published Perceptrons, proving that a single-layer network cannot learn XOR. No straight line can separate the classes, because the points that should be 1 lie on opposite corners. The book helped trigger the first "AI winter": years of reduced funding and lost faith.
The perceptron never converges. The boundary sweeps back and forth, always leaving at least one point on the wrong side. A single straight line cannot solve this. The solution? More layers, more neurons — or a completely different approach to learning.
What if, instead of hand-crafting a learning rule, we let neural networks evolve? Encode the network's weights as a strand of digital DNA. Spawn a population of random networks. Test each one. The fittest survive to reproduce — their genes shuffled and mutated into the next generation. Over time, evolution discovers weights that solve XOR. No calculus required.
| A | B | Expected | Output |
|---|---|---|---|
| 0 | 0 | 0 | — |
| 0 | 1 | 1 | — |
| 1 | 0 | 1 | — |
| 1 | 1 | 0 | — |
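The spawn-test-select-reproduce loop can be sketched compactly. Everything below is illustrative rather than the demo's actual setup: a 2-2-1 tanh/sigmoid network whose 9 weights form the genome, truncation selection with elitism, one-point crossover, and gaussian mutation. With this seed the run is deterministic, but evolution offers no guarantee of a perfect solution.

```python
import math
import random

random.seed(0)  # deterministic run; evolution is stochastic by nature
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(genome, x):
    # A 2-2-1 network: genes 0-3 are hidden weights, 4-5 hidden biases,
    # 6-7 output weights, 8 the output bias.
    h = [math.tanh(genome[2 * i] * x[0] + genome[2 * i + 1] * x[1] + genome[4 + i])
         for i in range(2)]
    out = genome[6] * h[0] + genome[7] * h[1] + genome[8]
    return 1 / (1 + math.exp(-out))  # sigmoid output in (0, 1)

def error(genome):
    # Squared error over the XOR truth table; lower error = higher fitness.
    return sum((forward(genome, x) - t) ** 2 for x, t in XOR)

def evolve(pop_size=60, generations=150, mut_rate=0.3):
    pop = [[random.uniform(-2, 2) for _ in range(9)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=error)
        survivors = pop[:pop_size // 4]   # truncation selection
        children = list(survivors)        # elitism: the fittest carry over
        while len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, 9)  # one-point crossover
            child = [g + random.gauss(0, 0.5) if random.random() < mut_rate else g
                     for g in a[:cut] + b[cut:]]  # gaussian mutation
            children.append(child)
        pop = children
    return min(pop, key=error)

best = evolve()
print(error(best), [round(forward(best, x)) for x, _ in XOR])
```

Note that nothing here computes a gradient: the only feedback evolution gets is each genome's total error, used to decide who survives.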
A search technique inspired by natural selection. Candidate solutions compete; the fittest reproduce through crossover and mutation. No gradients — just survival of the fittest.
Layers of connected neurons, each with learnable weights and biases. Data flows forward, transformed by weights and sigmoid activation. This is the fundamental building block of modern AI.
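That forward flow can be sketched directly, assuming a layer is simply a list of (weights, bias) pairs, one per neuron. The network shape and weight values below are illustrative.

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def forward(layers, inputs):
    # Each layer transforms the previous layer's activations:
    # weighted sum + bias, squashed through the sigmoid.
    activations = inputs
    for layer in layers:
        activations = [
            sigmoid(sum(w * a for w, a in zip(weights, activations)) + bias)
            for weights, bias in layer
        ]
    return activations

# A tiny 2-2-1 network with illustrative weights.
net = [
    [([1.0, -1.0], 0.0), ([-1.0, 1.0], 0.0)],  # hidden layer: 2 neurons
    [([2.0, 2.0], -1.0)],                      # output layer: 1 neuron
]
print(forward(net, [1.0, 0.0]))  # approximately [0.731]
```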
Using evolution to train neural networks instead of backpropagation. The genome encodes weights as genes. Used by Uber AI Labs and in early OpenAI research as an alternative to gradient descent.
XOR is not linearly separable — a single-layer network cannot learn it. Minsky & Papert proved this in 1969. Solving it requires a hidden layer: the proof that depth matters.
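To make "depth matters" concrete, here is one hand-wired sketch (the weights are illustrative, chosen so one hidden neuron computes OR and the other AND): the hidden layer re-represents the inputs so that a single output line can finish the job.

```python
def step(z):
    return 1 if z > 0 else 0

def xor_net(a, b):
    # Hidden layer: one neuron fires on OR, the other only on AND.
    h_or = step(a + b - 0.5)
    h_and = step(a + b - 1.5)
    # Output: fire when OR is on but AND is off, i.e. exactly one input is 1.
    return step(h_or - h_and - 0.5)

print([xor_net(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 1, 1, 0]
```

In the hidden layer's (OR, AND) coordinates the four points become linearly separable, which is exactly what no single-layer network can achieve on the raw inputs.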