Digital DNA

A visual journey through the building blocks of artificial intelligence. Interact with the algorithms that power modern AI — from the first artificial neuron to evolving neural networks.

scroll to begin
1958

The Perceptron

Frank Rosenblatt's Perceptron was the first algorithm that could learn from data. Inspired by biological neurons, it takes inputs, multiplies each by a learnable weight, sums them, and fires if the result exceeds a threshold. The New York Times wrote it would one day "walk, talk, see, write, and be conscious of its existence."

Click Train

The green line is the decision boundary — it divides the input space into two regions. Points above the line are classified as 1, below as 0. Watch it shift as the perceptron adjusts its weights each epoch. Green dots are correctly classified; red means the perceptron got it wrong.
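A minimal sketch of that training loop, assuming the classic perceptron learning rule. The dataset here is an AND gate, a hypothetical linearly separable stand-in (the demo's actual points aren't listed):

```python
# Perceptron: weighted sum of inputs, fire if the sum exceeds the threshold.
# Each misclassification nudges the weights (and so the boundary) toward
# the offending point, exactly the shift the animation shows per epoch.

def predict(w, b, x):
    # Fire (output 1) when the weighted sum crosses the threshold at 0.
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

def train(data, lr=0.1, epochs=50):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        errors = 0
        for x, target in data:
            error = target - predict(w, b, x)   # -1, 0, or +1
            if error != 0:
                w[0] += lr * error * x[0]       # nudge boundary toward point
                w[1] += lr * error * x[1]
                b += lr * error
                errors += 1
        if errors == 0:   # converged: every point on the right side
            break
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(AND)
print([predict(w, b, x) for x, _ in AND])  # → [0, 0, 0, 1]
```

Because AND is linearly separable, the loop reaches an error-free epoch and stops, which is exactly the guarantee Rosenblatt's convergence theorem provides.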

1969

The XOR Problem

In 1969, Minsky and Papert published Perceptrons, proving that a single-layer perceptron cannot learn XOR. No straight line can separate the classes — the points that should output 1 sit on opposite corners. The book helped trigger the first "AI winter": years of reduced funding and lost faith.

Click Train

The perceptron never converges. The boundary sweeps back and forth, always leaving at least one point on the wrong side. A single straight line cannot solve this. The solution? More layers, more neurons — or a completely different approach to learning.
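The same rule run on XOR makes the failure concrete: track the best accuracy ever reached across many epochs, and it never hits 4 out of 4. A sketch:

```python
# The perceptron rule applied to XOR. Since no line separates the classes,
# no weight setting classifies all four points; training cycles forever.

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

w, b, lr = [0.0, 0.0], 0.0, 0.1
best = 0
for epoch in range(1000):
    for x, target in XOR:
        error = target - predict(w, b, x)
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b += lr * error
    correct = sum(predict(w, b, x) == t for x, t in XOR)
    best = max(best, correct)
print(best)  # at most 3 of 4, never all four
```

The cap at 3 is not bad luck: it is Minsky and Papert's result playing out numerically.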

Evolution

Neuroevolution

What if, instead of hand-crafting a learning rule, we let neural networks evolve? Encode the network's weights as a strand of digital DNA. Spawn a population of random networks. Test each one. The fittest survive to reproduce — their genes shuffled and mutated into the next generation. Over time, evolution discovers weights that solve XOR. No calculus required.

Gen 0
Fitness Over Generations
Best Network
XOR Outputs
A | B | Expected Output
0 | 0 | 0
0 | 1 | 1
1 | 0 | 1
1 | 1 | 0
Population
Organisms: 100
Genome: 17 genes
Network: 2 → 4 → 1
Mutation: 15% rate
Crossover: 70%
Selection: Tournament (k=3)
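Wired together, the parameters above are enough for a working sketch: 100 organisms, 17-gene genomes decoding to a 2 → 4 → 1 sigmoid network, tournament selection with k = 3, 70% crossover, and a 15% per-gene mutation rate. The fitness function (closeness to the XOR truth table) and the Gaussian mutation step are assumptions; the page doesn't state how it scores or perturbs genomes.

```python
import math, random

random.seed(1)
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(g, x):
    # Decode the 17-gene strand for a 2 -> 4 -> 1 network:
    # genes 0..7 input->hidden weights, 8..11 hidden biases,
    # 12..15 hidden->output weights, 16 output bias.
    h = [sigmoid(g[2*i] * x[0] + g[2*i + 1] * x[1] + g[8 + i]) for i in range(4)]
    return sigmoid(sum(g[12 + i] * h[i] for i in range(4)) + g[16])

def fitness(g):
    # Assumed scoring: 4 minus squared error against the truth table.
    return 4.0 - sum((forward(g, x) - t) ** 2 for x, t in XOR)

def tournament(pop, k=3):
    return max(random.sample(pop, k), key=fitness)

def crossover(a, b):
    cut = random.randrange(1, len(a))   # single-point cut on the strand
    return a[:cut] + b[cut:]

def mutate(g, rate=0.15, scale=0.5):
    # Assumed mutation: Gaussian nudge per gene at the stated 15% rate.
    return [w + random.gauss(0, scale) if random.random() < rate else w
            for w in g]

POP, GENS = 100, 120
pop = [[random.uniform(-1, 1) for _ in range(17)] for _ in range(POP)]
start_best = max(fitness(g) for g in pop)
for gen in range(GENS):
    pop.sort(key=fitness, reverse=True)
    next_pop = pop[:2]                       # elitism: keep the two fittest
    while len(next_pop) < POP:
        a, b = tournament(pop), tournament(pop)
        child = crossover(a, b) if random.random() < 0.70 else a[:]
        next_pop.append(mutate(child))
    pop = next_pop
best = max(pop, key=fitness)
print(round(fitness(best), 3))  # 4.0 would mean a perfect XOR
```

With elitism the best fitness never decreases, so each generation's curve on the "Fitness Over Generations" chart can only climb or plateau.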

Genetic Algorithm

A search technique inspired by natural selection. Candidate solutions compete; the fittest reproduce through crossover and mutation. No gradients — just survival of the fittest.
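As a minimal illustration of that loop on its own, here is the same recipe on "OneMax", a toy problem where fitness is simply the number of 1 bits in a 20-bit string (the problem and parameters are illustrative, not from the page):

```python
# A bare-bones genetic algorithm: no gradients, just selection,
# crossover, and mutation driving fitness upward.
import random

random.seed(0)
N, POP, GENS = 20, 30, 60

def fitness(genome):
    return sum(genome)          # count of 1 bits

pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(POP)]
start = max(fitness(g) for g in pop)
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:POP // 2]              # the fittest half reproduce
    children = []
    while len(survivors) + len(children) < POP:
        a, b = random.sample(survivors, 2)
        cut = random.randrange(1, N)
        child = a[:cut] + b[cut:]           # single-point crossover
        if random.random() < 0.1:           # occasional bit-flip mutation
            child[random.randrange(N)] ^= 1
        children.append(child)
    pop = survivors + children
print(fitness(max(pop, key=fitness)))  # climbs toward N = 20
```

Because survivors carry over unchanged, the best genome is never lost, so fitness is monotone across generations.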

Neural Network

Layers of connected neurons, each with learnable weights and biases. Data flows forward, transformed by weights and sigmoid activation. This is the fundamental building block of modern AI.
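A forward pass through the 2 → 4 → 1 shape used in the demo might look like this, with random untrained weights just to show the data flow:

```python
# Each layer: weighted sum of its inputs plus a bias, squashed by a sigmoid.
import math, random

random.seed(42)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights, biases):
    # weights[j][i] is the connection from input i to neuron j
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# Random (untrained) parameters for a 2 -> 4 -> 1 network.
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(4)]
b1 = [random.uniform(-1, 1) for _ in range(4)]
w2 = [[random.uniform(-1, 1) for _ in range(4)]]
b2 = [random.uniform(-1, 1)]

hidden = layer([0, 1], w1, b1)
output = layer(hidden, w2, b2)[0]
print(output)  # always a value in (0, 1), thanks to the sigmoid
```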

Neuroevolution

Using evolution to train neural networks instead of backpropagation. The genome encodes weights as genes. Used by Uber AI Labs and in early OpenAI research as an alternative to gradient descent.
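The demo's "17 genes" figure follows directly from the 2 → 4 → 1 shape: 8 input-to-hidden weights, 4 hidden biases, 4 hidden-to-output weights, and 1 output bias. A round-trip encode/decode sketch:

```python
# Flatten a 2 -> 4 -> 1 network's parameters into one gene strand and back.
import random

random.seed(7)

def encode(w1, b1, w2, b2):
    return [w for row in w1 for w in row] + b1 + w2 + b2

def decode(genome):
    w1 = [genome[2*i:2*i + 2] for i in range(4)]   # 8 input->hidden weights
    b1 = genome[8:12]                              # 4 hidden biases
    w2 = genome[12:16]                             # 4 hidden->output weights
    b2 = genome[16:17]                             # 1 output bias
    return w1, b1, w2, b2

genome = [random.uniform(-1, 1) for _ in range(17)]
w1, b1, w2, b2 = decode(genome)
assert encode(w1, b1, w2, b2) == genome   # lossless round trip
print(len(genome))  # → 17
```

Because the mapping is lossless, crossover and mutation on the flat strand are equivalent to operating on the network's weights directly.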

The XOR Problem

XOR is not linearly separable — a single-layer network cannot learn it. Minsky & Papert proved this in 1969. Solving it requires a hidden layer: the proof that depth matters.
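One hidden layer suffices because XOR decomposes into linearly separable pieces, for example XOR(a, b) = OR(a, b) AND NOT AND(a, b). With hand-picked (not learned) step-activation weights:

```python
# Two hidden neurons carve the plane twice; the output neuron combines them.

def step(z):
    return 1 if z > 0 else 0

def xor(a, b):
    h_or  = step(a + b - 0.5)        # fires unless both inputs are 0
    h_and = step(a + b - 1.5)        # fires only when both inputs are 1
    return step(h_or - h_and - 0.5)  # OR, but not AND

print([xor(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # → [0, 1, 1, 0]
```

Each hidden neuron is itself just a perceptron; it is the composition of their two lines that bends the boundary around the opposite corners.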