The Simple Version

Let's explain AI like you're five. No math, no jargon — just simple ideas that build on each other.

The Building Block

What is a Neuron?

Imagine a tiny voting machine. It listens to a few yes/no inputs, but it doesn't trust all of them equally — some inputs are more important than others. It adds up the votes (weighted by importance), and if the total is high enough, it says "YES". Otherwise, it says "NO".

That's a neuron. Toggle the inputs below and adjust the weights to see how the neuron makes a decision about whether to bring an umbrella.

[Interactive demo: Umbrella Neuron. Three toggleable inputs with weights 0.4, 0.6, and 0.3, a threshold of 0.50, a running weighted sum Σ, and the resulting "Bring umbrella" verdict.]

Notice how changing the weights changes which inputs matter most. If you crank up the forecast weight and turn off the others, the neuron only cares about the forecast. The threshold controls how cautious the neuron is — lower means it says YES more easily.
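If you like reading code, here's the umbrella neuron in a few lines of Python. It's just a sketch: the weights and threshold mirror the demo's starting values, and the input names are illustrative.

```python
# A tiny "voting machine": weighted inputs, one threshold, a YES/NO answer.

def neuron(inputs, weights, threshold):
    """Return True ("YES") if the weighted vote clears the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return total >= threshold

# Inputs as 1/0: [it's cloudy, forecast says rain, friend carries umbrella]
weights = [0.4, 0.6, 0.3]   # how much the neuron trusts each input
threshold = 0.5             # how cautious the neuron is

print(neuron([1, 0, 0], weights, threshold))  # cloudy only: 0.4 < 0.5 -> False
print(neuron([1, 1, 0], weights, threshold))  # cloudy + rainy forecast: 1.0 >= 0.5 -> True
```

Lowering `threshold` to 0.3 would make the first case say YES too, which is exactly the "more cautious vs. more relaxed" knob described above.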

Getting Smarter

Learning from Mistakes

In real AI, nobody hand-picks the weights. The neuron learns them. Here's how: it starts with random guesses. Then it checks its answers against reality. When it's wrong, it adjusts.

Got rained on without an umbrella? Increase the weights for the inputs that were "on" that day. Carried an umbrella for nothing? Decrease them. Do this thousands of times, and the weights settle into the right values. This is the perceptron learning rule — the simplest form of machine learning.
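That adjust-when-wrong loop can be written out directly. In this sketch the little "weather diary" (inputs and whether it actually rained), the starting weights, and the learning rate are all made-up illustrative values.

```python
# The perceptron learning rule: nudge weights toward the right answer.

def predict(inputs, weights, threshold):
    return 1 if sum(x * w for x, w in zip(inputs, weights)) >= threshold else 0

def train(days, weights, threshold, rate=0.1, epochs=100):
    """days: list of (inputs, it_actually_rained) pairs."""
    for _ in range(epochs):
        for inputs, rained in days:
            error = rained - predict(inputs, weights, threshold)
            # Got rained on without an umbrella (error = +1):
            #   raise the weights of the inputs that were "on".
            # Carried one for nothing (error = -1): lower them.
            weights = [w + rate * error * x for w, x in zip(weights, inputs)]
    return weights

# Inputs: [cloudy, forecast says rain]. Here it rains exactly when the forecast says so.
days = [([1, 1], 1), ([1, 0], 0), ([0, 1], 1), ([0, 0], 0)]
weights = train(days, weights=[0.0, 0.0], threshold=0.5)
print([predict(x, weights, 0.5) for x, _ in days])  # -> [1, 0, 1, 0]
```

After training, the forecast weight ends up well above the threshold and the cloud weight stays low: the neuron has learned, from mistakes alone, that only the forecast matters in this toy diary.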

Weights

How much the neuron trusts each input. High weight = "I listen to this a lot." Low weight = "I mostly ignore this."

Threshold

The bar the weighted sum must clear. Like a "confidence level" the neuron needs before making a YES decision.

Learning Rate

How big each adjustment is. Too big and the neuron overshoots. Too small and it takes forever. Finding the sweet spot is key.

The Catch

Some Questions Are Too Hard for One Neuron

One neuron can learn AND ("bring umbrella if it's cloudy and the forecast says rain"). It can learn OR ("bring umbrella if either one is true"). But what about this rule:

"Bring an umbrella if it's cloudy OR if your friend says so — but NOT if both are true."

This is the XOR problem (exclusive or). Try to draw one straight line that separates the YES answers from the NO answers on a grid. You can't — and neither can one neuron. A single neuron can only learn patterns that a straight line can separate.

Why One Line Fails

The green dots (YES) sit on opposite corners. No single line can separate them from the red dots (NO).
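You can even check this claim by brute force. The sketch below scans a grid of candidate weights and thresholds and finds that none reproduces XOR. A finite grid isn't a proof, but the underlying contradiction holds for any weights: the two YES cases force each weight to reach the threshold on its own, so together they overshoot it, while the (1, 1) NO case demands their sum stay below it.

```python
# Brute-force search: can ANY single neuron compute XOR? (Spoiler: no.)

def neuron(x1, x2, w1, w2, threshold):
    return 1 if x1 * w1 + x2 * w2 >= threshold else 0

xor_cases = [((0, 0), 0), ((1, 0), 1), ((0, 1), 1), ((1, 1), 0)]

grid = [v / 10 for v in range(-20, 21)]  # -2.0 .. 2.0 in steps of 0.1
solutions = [
    (w1, w2, t)
    for w1 in grid for w2 in grid for t in grid
    if all(neuron(x1, x2, w1, w2, t) == want for (x1, x2), want in xor_cases)
]
print(len(solutions))  # -> 0: no weights on this grid get all four cases right
```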

This discovery, published in 1969, was devastating. Many researchers had hoped single neurons could learn anything. When it was proved that one neuron could never solve XOR, funding dried up and progress stalled for years — the first "AI winter."

The Solution

Teamwork: Layers of Neurons

One neuron can't solve XOR, but a team of neurons can. Put a few neurons in a "hidden layer" between the inputs and the output. Each hidden neuron learns a different partial pattern. Then the output neuron combines their answers to solve the whole puzzle.

This is a neural network. It's just neurons connected in layers. The first layer looks at the raw data. The middle layers find patterns. The last layer gives the answer. Add more layers and more neurons, and you can solve harder and harder problems.

Team of Neurons

Two inputs → a hidden layer that finds sub-patterns → one output that combines them.
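Here's a hand-wired version of that team in Python. The weights are picked by hand just to show the idea (a real network would learn them): one hidden neuron fires for OR, another for AND, and the output neuron says "OR, but not AND," which is exactly XOR.

```python
# Two hidden neurons plus one output neuron solve what one neuron can't.

def neuron(inputs, weights, threshold):
    return 1 if sum(x * w for x, w in zip(inputs, weights)) >= threshold else 0

def xor_net(x1, x2):
    a = neuron([x1, x2], [1, 1], 1)    # hidden neuron A: OR (either input on)
    b = neuron([x1, x2], [1, 1], 2)    # hidden neuron B: AND (both inputs on)
    return neuron([a, b], [1, -2], 1)  # output: OR, with a strong veto from AND

print([xor_net(x1, x2) for x1, x2 in [(0, 0), (1, 0), (0, 1), (1, 1)]])
# -> [0, 1, 1, 0]
```

Notice the negative weight on the output neuron: the AND detector votes against bringing the umbrella, overruling the OR detector when both inputs are on. Each hidden neuron solved an easy straight-line problem, and combining them cracked the "impossible" one.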

That's the foundation. Everything in modern AI — image recognition, language models, self-driving cars — is built from layers of these simple voting machines, stacked deep. The next chapter shows how evolution can train these networks without anyone writing learning rules by hand.

Hidden Layer

Neurons between input and output that nobody sees directly. They find intermediate patterns — the "stepping stones" to the answer.

Depth

More layers = deeper network = more complex patterns. "Deep learning" literally means learning with many layers.