Leveling Up
When I was eleven, I just got good.
-Bobby Fischer
Human skill doesn’t improve the way we think it does.
The story we tell ourselves is simple: put in the hours, get better, repeat. Effort compounds. And this is true, until it isn’t. At some point, the model breaks down completely. You hit a wall. You grind for weeks, months, sometimes years, and nothing moves. Your rating is flat. Your performance is flat. You’re working harder than ever and can’t get better.
Then one day, without warning, everything clicks. You leap forward. Multiple levels at once.
The Chess Problem
This phenomenon is so well-documented in chess that it’s practically a rite of passage. GM Noël Studer, who coaches competitive players, describes the pattern: first a dip, then a big jump. Players who work on new approaches to the game see their ratings drop before they surge past their old ceiling. The temptation during the dip is to retreat to old habits. The ones who don’t are the ones who break through.
Chess.com columnist Jeremy Silman tells the story of his own trajectory: stuck around 1900 for years, considered quitting, then experienced what he described as an overnight jump from roughly 1900 to 2200 strength. A discontinuous leap. He went undefeated in his next tournament and never looked back. Pattern recognition, he argued, was the mechanism, but the timing of when it all came together was impossible to predict or force.
Asked what happened when he suddenly started beating top players at thirteen, Bobby Fischer said: “I got better.” As if some internal threshold had been crossed and the transformation was self-evident.
The Video Game Metaphor
If you’ve played any RPG, you know the experience system. You accumulate XP through a thousand small actions (killing mobs, completing quests, grinding). The bar fills up slowly. Then it hits the threshold, and you level up. New abilities unlock. Your stats jump. At higher levels, the XP requirements get steeper. It takes longer. But the jump, when it comes, is distinct.
This is a better model of human skill acquisition than the smooth curve we were taught. Except reality is even weirder than the game version. In games, the XP bar fills at a predictable rate. In real life, you can be grinding at what looks like zero XP gain for months. The bar appears frozen, and then you suddenly discover you’ve leveled up multiple times at once.
The subskills that seemed disconnected snap into alignment, and you find yourself operating at a level beyond what you thought.
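The game version of the mechanic is easy to sketch. Here is a toy leveling system in Python; the function names and every number are my own invention, not from any particular game. XP requirements grow geometrically, and one large gain can trigger several level-ups at once.

```python
# Toy RPG leveling system. All numbers are invented for illustration.

def xp_required(level, base=100, growth=1.5):
    """XP needed to go from `level` to `level + 1`; steeper each level."""
    return int(base * growth ** (level - 1))

def grant_xp(level, xp, gained):
    """Add XP. The while loop matters: one big gain can
    carry you through multiple thresholds in a single call."""
    xp += gained
    while xp >= xp_required(level):
        xp -= xp_required(level)
        level += 1
    return level, xp

level, xp = 1, 0
level, xp = grant_xp(level, xp, 90)    # bar fills, no level-up yet
level, xp = grant_xp(level, xp, 400)   # one gain, several level-ups at once
```

The while loop is the point: nothing in the rules stops a single burst of XP from jumping you multiple levels, which is the game's version of the frozen-bar-then-leap experience.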
Deliberate Practice in Reality
This pattern shows up everywhere that demands deliberate practice. In professional StarCraft: Brood War, a game that combines mechanical execution, strategic decision-making, and real-time adaptation, players hit plateaus that last entire competitive seasons. Korean pro-gamers describe periods of intense practice yielding no measurable results, followed by sudden jumps in capability that feel inexplicable even to them.
George Leonard, an aikido master and the author of Mastery, built an entire framework around this observation. He identified what he called the mastery curve: long plateaus punctuated by brief spurts of visible improvement, followed by slight dips, followed by a new, higher plateau. Leonard argued that the plateau is the path. The plateau is where the real learning happens, invisibly, beneath the surface. The spurt is just the moment when it becomes legible.
Neuroscientist Karl Pribram offered a mechanistic explanation. He described hypothetical brain-body systems, including what he called the “habitual behavior system.” Our cognitive and effort systems are involved in deliberate learning, but the visible spurts occur only after that conscious work has been incorporated into the unconscious habitual system. If that sentence was confusing, check out my DIY Therapy post.
When you talk to coaches (chess coaches, music teachers, martial arts instructors, competitive gaming coaches), they all accept this as baseline reality. They’ve seen it hundreds of times. It’s like talking to a writer about why the hero’s journey resonates with our minds. They just know it’s how we’re wired.
The Machine Learning Contrast
What interests me most about this is how different it is from how machines learn.
When you train a neural network, the loss curve goes down. You can watch gradient descent do its work epoch by epoch. The model gets a little better, then a little better, then a little better. There’s no plateau followed by a sudden awakening. There’s no moment where the model “just gets good.”
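That smoothness is visible even in the smallest possible example. A minimal sketch of plain gradient descent on a one-dimensional quadratic (a toy loss function I made up for illustration, not any real training setup):

```python
# Gradient descent on f(w) = (w - 3)^2: the loss shrinks a little
# every single step. No frozen bar, no sudden awakening.

def loss(w):
    return (w - 3.0) ** 2

def grad(w):
    return 2.0 * (w - 3.0)

w = 0.0        # starting weight
lr = 0.1       # learning rate
losses = [loss(w)]
for _ in range(50):
    w -= lr * grad(w)          # standard gradient descent update
    losses.append(loss(w))
```

Each update multiplies the remaining error by a constant factor, so the loss curve is a smooth exponential decay: every step improves things a little, and no step improves them a lot.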
The step-function gains in AI don’t come from the model spontaneously reorganizing its understanding. They come from brute force on the outside: more compute, more data, more parameters. We scale the infrastructure, and the performance scales with it. The breakthrough is in the amount of resources we throw at the model, not in anything the model does on its own.
Grokking the Exception
However, there’s one exception: a machine learning phenomenon called grokking. First documented by researchers at OpenAI in 2022, grokking occurs when a neural network trains on a small dataset, memorizes it completely (overfitting), and then (long after training loss has plateaued) suddenly generalizes. Test accuracy jumps from chance to near-perfect. The network appears to “understand” the underlying pattern all at once, well after it seemed to have stopped learning anything useful.
The parallels to human plateau-and-breakthrough are striking. But there are differences. Grokking requires specific conditions: particular dataset sizes and regularization settings, weight decay especially. Change the hyperparameters slightly, and it disappears. It’s not the default mode. For humans, plateau-and-breakthrough is the default mode. We don’t need our hyperparameters tuned just right to experience it.
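For the curious, the datasets in the grokking work are tiny algorithmic tasks such as modular arithmetic. A sketch of that setup, assuming modular addition; the modulus and the train/test split fraction here are illustrative choices of mine, not the paper's exact values:

```python
import random

# Grokking-style dataset: every pair (a, b) labeled (a + b) mod p.
# p and the 50/50 split are illustrative, not the paper's settings.
p = 97
pairs = [((a, b), (a + b) % p) for a in range(p) for b in range(p)]

random.seed(0)
random.shuffle(pairs)
split = int(0.5 * len(pairs))          # train on half, test on the rest
train, test = pairs[:split], pairs[split:]
```

Memorizing `train` is trivial for an overparameterized network. The surprise is what happens afterward: test accuracy sits near chance (about 1/p) long after training accuracy hits 100%, then jumps to near-perfect.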
The Science
Neuroscience research supports this. A 2022 paper in Current Research in Neurobiology argued that what looks like slow, incremental learning in laboratory settings actually decomposes into two parallel processes: knowledge acquisition, which is rapid and step-like, and behavioral expression, which is slower and more variable.
The “smooth learning curve” we measure in experiments is an artifact of averaging. Individuals learn through many sudden jumps in understanding, which show up as noisy behavioral output; average enough of those noisy step functions together and you get a smooth curve. The brain learns in steps. We just can’t always see them from the outside.
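The averaging artifact is easy to reproduce with a toy simulation (all numbers invented): give every learner a pure step-function curve with a random jump time, then compute the group average that an experiment would plot.

```python
import random

random.seed(42)

TRIALS = 100
LEARNERS = 200

# Each learner "gets it" all at once at a random trial and performs
# perfectly from then on: every individual curve is a pure step.
jump_at = [random.randrange(TRIALS) for _ in range(LEARNERS)]

def individual_curve(j):
    return [0.0 if t < j else 1.0 for t in range(TRIALS)]

# The group average, the thing experiments usually report, rises gradually.
average = [
    sum(1.0 for j in jump_at if t >= j) / LEARNERS
    for t in range(TRIALS)
]
```

Each individual curve contains only 0s and 1s, yet the average climbs gradually from near 0 to 1: a textbook “smooth learning curve” assembled entirely out of sudden jumps.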
What This Means
There’s something uniquely human in this pattern. All the deliberate practice on seemingly disconnected subskills (strategy, tactics, openings) accumulates invisibly until a critical mass is reached and the whole system reorganizes.
Machine improvement is arithmetic: more input, more output. Computers are dimmer switches. Humans are combination locks.
The plateau is the lock not budging yet. Keep dialing.

