What separates things that last from things that fall apart?
Look around you. The chair you’re sitting on. The walls of the room. Your own body.
All of these are temporary victories against chaos.
The universe tends toward disorder. Drop a glass, it shatters. Leave a house unattended, it crumbles. Stop eating, your body breaks down. Entropy — the tendency toward randomness — is relentless.
And yet, here you are. Organized. Coherent. Persisting.
How?
This article proposes an answer: what survives is what predicts well.
Not metaphorically. Literally. From molecules to minds, the things that persist are the things that have found ways to reduce uncertainty about what comes next.
Let me show you what I mean.
Part I: The Puzzle of Persistence
Why does anything hold together?
The second law of thermodynamics says disorder increases. Mix cream into coffee, it never unmixes. Scramble an egg, it never unscrambles. Time flows toward chaos.
But look at a living cell. Billions of molecules, precisely arranged, working in concert. That’s not chaos. That’s extraordinary order.
How is this possible?
The answer lies in a crucial detail: the second law guarantees rising entropy only for isolated systems, ones that exchange nothing with their surroundings. Most interesting things aren’t isolated. They trade energy and matter with their environment.
A refrigerator makes things cold inside by pumping heat outside. Total entropy increases (the room gets warmer), but local entropy decreases (the food stays fresh).
Life works the same way. You maintain internal order by exporting disorder — breathing out CO₂, radiating heat, excreting waste. You’re not violating the second law. You’re paying the entropy bill elsewhere.
But which structures emerge?
Knowing that order is possible doesn’t explain which ordered structures actually appear.
Heat water from below. Past a critical temperature difference between bottom and top, convection cells emerge: beautiful hexagonal patterns that nobody designed. Why hexagons? Because that shape moves heat more efficiently than random churning.
The physicist Ilya Prigogine called these “dissipative structures.” They exist because they’re good at processing energy flow. Structure emerges to serve dissipation.
But here’s the deeper question: among all possible structures, which ones persist?
Part II: Information Is Physical
A surprising discovery
In 1961, a physicist named Rolf Landauer proved something that sounds almost trivial: erasing information costs energy.
Think about it. When your computer truly erases data, overwriting the bits rather than just marking a file as deleted, that’s not free. There’s a minimum energy required, set not by engineering limitations but by physics itself.
This was confirmed in 2012 when researchers actually measured the heat released when a single bit of information was erased. Landauer was right.
The implication is profound: information is not abstract. It’s physical. Erasing it carries an unavoidable energy cost, however tiny. It obeys thermodynamic laws.
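How small is that cost? Landauer’s bound puts a number on it. At room temperature (about 300 K, an assumption for the arithmetic below), erasing a single bit must dissipate at least:

```latex
E_{\min} \;=\; k_B T \ln 2
\;\approx\; \left(1.38\times10^{-23}\,\tfrac{\mathrm{J}}{\mathrm{K}}\right)\times(300\,\mathrm{K})\times 0.693
\;\approx\; 3\times10^{-21}\,\mathrm{J\ per\ bit}
```

Tiny. But not zero, and that is the point.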
Entropy is uncertainty
Here’s where it gets interesting.
In physics, entropy measures how much you don’t know about a system’s precise state. High entropy means high uncertainty — many possible configurations.
In information theory, entropy means exactly the same thing: uncertainty about what message will arrive, what state a system is in.
These aren’t just similar concepts. They share the same mathematical form; the two formulas differ only by a constant factor and a choice of units. This isn’t coincidence. It’s the universe telling us something deep:
Uncertainty is physical.
When something reduces its internal uncertainty — becomes more organized, more predictable — it’s doing something thermodynamically real.
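If you want to see the identity for yourself, here are the two formulas side by side, where p_i is the probability of the system being in state i (physics) or of message i arriving (information theory):

```latex
S_{\text{physics}} \;=\; -\,k_B \sum_i p_i \ln p_i
\qquad\qquad
H_{\text{information}} \;=\; -\sum_i p_i \log_2 p_i
```

The only differences are Boltzmann’s constant and the base of the logarithm. Both are unit conversions, not new content.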
Part III: The Predictive Entropy Hypothesis
The core idea
Here’s the hypothesis:
Systems persist to the extent that they reduce uncertainty about what comes next.
Not all uncertainty — just the uncertainty that matters for survival. And not through magic — through prediction. Building internal models that anticipate the future.
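One compact way to write this, as my own gloss in Shannon’s notation rather than a formula quoted from the paper: call the system’s internal model M and its survival-relevant future F. The hypothesis says persisting systems are the ones for which

```latex
I(M;F) \;=\; H(F) \,-\, H(F \mid M) \;>\; 0
```

The mutual information I(M;F) measures how much knowing the model’s state shrinks uncertainty about what comes next. Other things being equal, the larger it is, the longer the system lasts.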
Let’s unpack why this makes sense.
Why prediction matters
Imagine two bacteria in the same environment.
Bacterium A just reacts to what’s happening right now. It bumps into food, it eats. It bumps into poison, it’s already too late.
Bacterium B has a simple model: “if the chemical concentration is increasing, keep going; if it’s decreasing, try a different direction.” It predicts where food will be and moves toward it.
Which bacterium survives longer?
Obviously B. It can find food before stumbling onto it. It can avoid danger before touching it. Prediction provides a survival advantage.
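Here is a minimal sketch of that difference in code. Everything in it is invented for illustration (the one-dimensional world, the food gradient, the two strategies); real bacterial chemotaxis is far richer, but the logic is the same.

```python
import random

def food(x):
    """Food concentration peaks at x = 0 and falls off with distance."""
    return 1.0 / (1.0 + abs(x))

def run(predictive, steps=200):
    x, direction, last = 10.0, -1, food(10.0)
    eaten = 0.0
    for _ in range(steps):
        if predictive:
            # Bacterium B: if concentration dropped since the last step,
            # the current heading was a bad prediction, so reverse it.
            if food(x) < last:
                direction *= -1
        else:
            # Bacterium A: no model of the gradient, just wander at random.
            direction = random.choice([-1, 1])
        last = food(x)
        x += direction
        eaten += food(x)  # reward is proportional to the local food level
    return eaten

random.seed(0)
print("reactive  :", round(run(predictive=False), 1))
print("predictive:", round(run(predictive=True), 1))
```

Run it and the predictive bacterium typically ends up with several times more food, simply because a one-step model of the gradient beats blind wandering.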
Now scale this up. Every living thing that persists has found some way to anticipate its environment:
- Your immune system predicts which molecules are “self” vs. “invader”
- A plant predicts where sunlight will be and grows toward it
- Your brain predicts what sensory input should arrive — and alerts you when it doesn’t
The selection filter
This isn’t about systems “wanting” to predict. There’s no intention required.
It’s simpler than that: systems that fail to reduce relevant uncertainty get surprised. Surprised systems make errors. Errors accumulate. Eventually, the system falls apart.
Systems that happen to reduce uncertainty — through whatever mechanism — last longer. They’re still around to observe.
We see predictors because non-predictors are already gone.
This is the same logic as evolution. Organisms don’t “want” to be fit. But fit organisms survive and reproduce. Over time, we see mostly fit organisms.
The Predictive Entropy Hypothesis extends this logic beyond biology: anything that persists — molecules, cells, organisms, organizations — does so because it has found a way to reduce uncertainty about its environment.
Part IV: Your Brain as Prediction Engine
You don’t see reality
Here’s something neuroscience has discovered: your brain doesn’t passively receive information from your senses. It actively predicts what should arrive — then checks against reality.
Try this: close your eyes and picture your living room. Where’s the couch? The window? The door?
You can “see” it. Eyes closed. That’s your brain’s prediction — its internal model of what would be there if you looked.
Now open your eyes. Your brain doesn’t start from scratch. It compares its prediction to the incoming sensory data. If they match, experience flows smoothly. If they don’t, you feel surprise — attention snaps to the mismatch.
This is called predictive processing, and it explains something puzzling: why do optical illusions work? Because your brain’s prediction overrides the raw data. You see what you expect, not what’s there.
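Stripped to a toy, that loop looks like the sketch below. The sensory stream, the single learning rate, and the surprise threshold are placeholders chosen for illustration; real predictive-processing models stack many such units into a hierarchy.

```python
def predictive_loop(signal, learning_rate=0.3, surprise_threshold=1.0):
    """Track a sensory signal by predicting it, then correcting on error."""
    prediction = 0.0
    for observed in signal:
        error = observed - prediction           # prediction meets reality
        if abs(error) > surprise_threshold:
            print(f"surprise! expected {prediction:.1f}, got {observed:.1f}")
        prediction += learning_rate * error     # update the internal model
    return prediction

# A steady signal the model learns, followed by a sudden change.
stream = [2.0] * 10 + [6.0] * 5
predictive_loop(stream)
```

It is surprised at first (the model starts out knowing nothing), settles once the signal becomes predictable, and is surprised again when the signal jumps. That is the whole predict, compare, update cycle in miniature.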
Why brains evolved
Brains are expensive. Yours uses about 20% of your body’s energy while being only 2% of its mass. Why would evolution build something so costly?
The Predictive Entropy Hypothesis answers: because prediction is worth it.
A brain lets you model the future. Anticipate threats before they arrive. Find food before you’re starving. Plan actions before you take them. This uncertainty reduction is so valuable for survival that evolution kept building bigger, better prediction engines.
Your brain exists because predicting well keeps you alive.
Part V: The Same Pattern Everywhere
Molecules
A protein is a chain of amino acids. It folds into a specific shape, not randomly, but into the configuration with the lowest free energy.
That shape is also the one with lowest internal uncertainty. The atoms “know where they should be” — their positions are constrained, predictable. The protein persists because it’s stable. Unfolded proteins get recycled.
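The link between low energy and low uncertainty is Boltzmann’s: at temperature T, the probability of finding the chain in configuration x falls off exponentially with that configuration’s energy, so the low-energy folded state is overwhelmingly the most likely, and therefore the most predictable, place to find it.

```latex
p(x) \;=\; \frac{e^{-E(x)/k_B T}}{Z},
\qquad
Z \;=\; \sum_{x'} e^{-E(x')/k_B T}
```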
Cells
Your body replaces most of its cells regularly. Skin cells last weeks. Blood cells last months. Yet “you” persist.
How? Because each cell carries instructions for maintaining the pattern. DNA is a prediction about what proteins to build. Cellular machinery is a prediction about how to stay alive. When these predictions break down, when a cell stops following the body’s shared program and divides unchecked, we call it cancer.
Organizations
A company that persists has reduced uncertainty for its members: clear roles, predictable processes, stable expectations. When a company becomes chaotic — when employees can’t predict what will happen day to day — it loses people. Eventually it dissolves.
Governments, religions, families — any social structure that lasts has found ways to make the future more predictable for its members.
Culture and belonging
Why do humans form cultures? Create shared traditions? Wear team colors? Sing national anthems?
Uncertainty reduction.
A shared culture is a shared prediction system. When everyone follows the same norms, you can anticipate how strangers will behave. You know what’s polite. You know what’s offensive. You know what to expect.
Consider social norms: don’t cut in line, say please and thank you, respect personal space. These aren’t arbitrary. They’re coordination tools. When everyone follows them, social interactions become predictable. When they break down, every encounter becomes uncertain — exhausting and potentially dangerous.
Memes work the same way. A shared joke, a shared reference, a shared way of seeing the world. When you meet someone who “gets it,” uncertainty drops. You can predict how they’ll react. You know what they’ll find funny. Connection becomes easier.
This is why sports fandom feels so powerful. Wearing the same jersey as thousands of strangers creates instant predictability. You know you share something. You can anticipate their reactions. The uncertainty of “who are these people?” collapses into “these are my people.”
Belonging isn’t just emotional. It’s informational. A tribe is a group where you can predict each other.
Why politics is a prediction argument
Here’s something the framework illuminates: the fundamental divide in politics isn’t about values. It’s about prediction strategies.
Conservatism says: “We know what works. Our institutions, traditions, and norms have been tested by time. Don’t change what isn’t broken. Minimize uncertainty by preserving stability.”
Progressivism says: “The world is changing. Old solutions don’t fit new problems. We must adapt, experiment, update our models. Minimize uncertainty by staying flexible.”
Both are rational responses to uncertainty. Both are prediction strategies.
The conservative fears that change will destroy what works — introducing uncertainty where there was stability. The progressive fears that rigidity will leave us unprepared — unable to predict a changed world.
This is the explore/exploit trade-off:
- Exploit: Use what you know works. Low risk, but you might miss better options.
- Explore: Try new things. Higher risk, but you might find better solutions.
Too much exploitation → you get stuck, unable to adapt when the world changes. Too much exploration → you never build stable foundations, always chasing novelty.
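One standard way to make this trade-off concrete is the multi-armed bandit with an epsilon-greedy policy, where epsilon is the fraction of time spent exploring. The three payoffs below are made up for the sketch.

```python
import random

def epsilon_greedy(payoffs, epsilon, rounds=1000):
    """Balance exploiting the best-known option against exploring the rest."""
    estimates = [0.0] * len(payoffs)   # current model of each option's value
    counts = [0] * len(payoffs)
    total = 0.0
    for _ in range(rounds):
        if random.random() < epsilon:
            choice = random.randrange(len(payoffs))    # explore
        else:
            choice = estimates.index(max(estimates))   # exploit
        reward = random.gauss(payoffs[choice], 1.0)
        counts[choice] += 1
        # nudge the estimate toward the observed reward
        estimates[choice] += (reward - estimates[choice]) / counts[choice]
        total += reward
    return total / rounds

random.seed(1)
true_payoffs = [1.0, 2.0, 5.0]   # hidden values of three options
for eps in (0.0, 0.1, 0.5):
    print(f"epsilon={eps:.1f}  average reward={epsilon_greedy(true_payoffs, eps):.2f}")
```

With epsilon at zero, the agent tends to lock onto the first decent option it tries and never discovers the better one. With epsilon too high, it keeps sampling options it already knows are worse. The sweet spot lies in between, and where it lies shifts as the world does.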
The same tension appears everywhere: Should we invest in proven industries or emerging technologies? Should we focus on maintaining infrastructure or exploring space? Should we protect existing jobs or prepare for automation?
The framework suggests: both are necessary.
A society that only conserves becomes brittle. A society that only explores becomes chaotic. The healthiest systems balance both — preserving what works while testing new approaches.
The argument between conservative and progressive isn’t about who’s right. It’s about the optimal mix — and that mix depends on how fast the world is changing.
The thread
See the pattern?
- Molecules persist when their configuration is stable (low uncertainty about atomic positions)
- Cells persist when their processes are consistent (low uncertainty about internal states)
- Organisms persist when they predict their environment (low uncertainty about threats and resources)
- Organizations persist when they create predictability (low uncertainty for members)
- Cultures persist when they create shared expectations (low uncertainty about others’ behavior)
- Consciousness emerges when self-modeling improves prediction (low uncertainty about own actions)
- Politics debates the balance between stability and adaptation (both reduce uncertainty differently)
Same pattern. Different scales. Reduce uncertainty → Persist.
Part VI: Why This Explains So Much
Why learning feels good
You know that feeling when something finally makes sense? When scattered facts click into a coherent understanding?
That’s uncertainty collapsing. Your brain’s model just got simpler, more organized, more predictive. The “aha” moment is the feeling of entropy dropping.
Learning feels good because it is good — for persistence. Evolution wired reward into uncertainty reduction because reducing uncertainty keeps you alive.
Why we see patterns everywhere
Humans find faces in clouds, hear words in noise, see meaning in randomness. Sometimes we’re wrong. Why would evolution build a brain that makes mistakes?
Because the alternative is worse.
A brain that’s too conservative — that only sees patterns when absolutely certain — misses real patterns that matter. Miss the leopard hiding in the grass, and you’re dead.
Better to over-detect. False positives waste attention. False negatives cost lives.
We’re pattern-seekers because pattern-finding is uncertainty reduction, and uncertainty reduction keeps us alive.
Why phobias make evolutionary sense
Spiders. Snakes. Heights. Enclosed spaces. The dark.
Most phobias cluster around the same things — and those things were genuinely dangerous for our ancestors. A fear of snakes makes sense when one bite means death. A fear of heights makes sense when one fall ends everything.
Phobias are prediction systems with the sensitivity turned way up.
Your brain predicts “snake = danger” so strongly that even a picture triggers the response. Even a curved stick in the grass. The prediction fires before you can think.
Is this a mistake? Sometimes. You waste energy fleeing from garden hoses.
But consider the alternative: a brain that waits for certainty. That needs to examine the snake carefully before deciding. That brain gets bitten.
Phobias persist because over-predicting danger is cheaper than under-predicting it. A hundred false alarms cost less than one real snake bite.
The same logic as pattern-seeking, applied to threats: when the cost of being wrong is death, predict early and predict often.
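The arithmetic behind “a hundred false alarms cost less than one bite” is easy to sketch. The numbers below are invented for illustration; only the asymmetry between the two costs matters.

```python
# Hypothetical costs (in arbitrary "fitness units") and encounter odds.
COST_FALSE_ALARM = 1        # fleeing from a garden hose
COST_MISSED_SNAKE = 10_000  # a real bite
P_SNAKE = 0.01              # chance a curved shape is actually a snake

def expected_cost(p_flee_when_harmless, p_flee_when_snake):
    """Average cost per curved-shape encounter for a given detector."""
    harmless = (1 - P_SNAKE) * p_flee_when_harmless * COST_FALSE_ALARM
    dangerous = P_SNAKE * (1 - p_flee_when_snake) * COST_MISSED_SNAKE
    return harmless + dangerous

print("jumpy detector:", expected_cost(p_flee_when_harmless=0.9, p_flee_when_snake=0.99))
print("calm detector :", expected_cost(p_flee_when_harmless=0.1, p_flee_when_snake=0.50))
```

As long as a missed snake costs vastly more than a needless sprint, the jumpy detector wins on average, even though it is wrong far more often.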
Why chronic stress is harmful
Unpredictable environments — unstable jobs, chaotic relationships, uncertain futures — cause measurable health damage. Inflammation rises. Immune function drops. Lifespan shortens.
Why would uncertainty hurt physically?
Because chronic uncertainty is chronic prediction failure. Your brain can’t build stable models. Stress responses stay elevated (waiting for danger). Resources are diverted from maintenance to vigilance. The system degrades.
Stability isn’t just comfortable. It’s necessary for persistence.
Why expertise feels like intuition
Ask a chess master why they made a move. Often they can’t fully explain. “It felt right.”
This isn’t mysticism. It’s compressed prediction.
Years of experience have built models so refined that they run below conscious awareness. The expert doesn’t calculate — they recognize. Uncertainty about “what happens if I do X” has already been reduced through thousands of hours of feedback.
Intuition is prediction running on autopilot.
Why consciousness exists
Here’s a question that has puzzled philosophers for millennia: Why are we conscious? Why does it feel like something to be you?
The framework suggests an answer: consciousness is what prediction feels like when it becomes complex enough to include a model of itself.
Follow the logic:
A simple organism predicts its environment. Nutrients here, danger there. No self-awareness needed.
A more complex organism predicts other organisms. What will the predator do? Now you need a model of other minds.
But here’s the key: you are also part of your environment. Your own actions affect what happens next. To predict accurately, you need to predict what you will do.
This requires a model of yourself.
Self-awareness is a prediction tool. To anticipate consequences, you need to represent yourself as an actor in your model. “If I do X, then Y will happen” requires an “I” that does things.
The more complex your social environment, the more valuable self-modeling becomes. What will others think of me? How will I feel tomorrow? What kind of person am I?
All require modeling yourself as an object in the world.
Consciousness may be what this process feels like from the inside.
When you model yourself modeling the world, the loop closes. The prediction system predicts its own predictions. A “viewer” emerges — not designed, but as a logical consequence.
This explains why consciousness correlates with complexity. Simple systems don’t need self-models. Complex social systems do. Past a threshold, self-modeling becomes so valuable it emerges.
A thermostat doesn’t need to know it’s a thermostat. But you — navigating other minds and long-term consequences — benefit enormously from knowing you’re you.
Consciousness is the feeling of being a model that includes itself.
Part VII: The Deeper Unity
One principle, many expressions
Neuroscientist Karl Friston proposed the Free Energy Principle: adaptive systems act to minimize surprise, or more precisely an upper bound on it called variational free energy. Your brain, your body, any living thing: all can be described as reducing the gap between expectation and reality.
The Predictive Entropy Hypothesis connects this to physics.
The principle of least action says physical systems follow paths that make a quantity called “action” stationary, usually a minimum. Balls roll downhill along specific paths. Light bends through water at the angle that minimizes its travel time. Planets orbit in specific ellipses. All minimizing.
These look like separate ideas — one about brains, one about physics. But mathematically, they’re the same structure.
The Informational Action Principle shows that “minimize surprise” and “minimize action” are two expressions of a single pattern. The mathematics is identical. Only the domain differs.
The same logic that makes light take the fastest path makes your brain seek accurate predictions.
This isn’t metaphor. It’s mathematics. And it suggests something profound: the universe doesn’t just allow prediction. It’s built on it.
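For the curious, here are the two quantities in their standard textbook forms. The claim that they instantiate one pattern is the paper’s; the notation below is just the conventional one, with q a candidate path or belief distribution, o the observations, and z the hidden states.

```latex
\text{Classical action:}\quad
S[q] \;=\; \int_{t_0}^{t_1} L\big(q(t), \dot q(t)\big)\,dt,
\qquad \delta S = 0

\text{Variational free energy:}\quad
F[q] \;=\; \mathbb{E}_{q(z)}\big[\ln q(z) - \ln p(z, o)\big],
\qquad q^{*} = \arg\min_q F
```

Both are functionals: feed in a whole candidate (a path, a belief), get out a single number, and the behavior actually observed is the one that extremizes it. That shared shape is the bridge the hypothesis builds on.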
Part VIII: What This Means
Existence has requirements
Here’s the picture that emerges:
The universe tends toward disorder. But some configurations resist — they hold together, persist, survive.
What do these configurations have in common? They’ve found ways to reduce uncertainty about their environment. Stable molecules. Self-repairing cells. Predicting brains. Ordered societies.
You exist because you’re good at predicting. Your ancestors were good at predicting. Their cells were good at maintaining order. The molecules they were made of were good at holding configuration.
At every level, the same filter: reduce uncertainty or dissolve.
You’re reading this because prediction works
Right now, your brain is predicting the next word in this sentence.
It’s using patterns learned from a lifetime of reading. It’s building expectations. When the actual word arrives, prediction meets reality.
Match → continue smoothly. Mismatch → surprise → update → try again.
This process, repeated billions of times, is why you can read at all. It’s why you can think. It’s why you exist.
The universe doesn’t care about prediction. It doesn’t reward it consciously. But it filters for it. What predicts well, persists. What doesn’t, fades.
You’re still here.
That means you’re doing something right.
Conclusion: The Price of Existence
Let’s trace the thread:
- The universe tends toward disorder
- But local order is possible — if you export entropy elsewhere
- Information is physical — uncertainty is real
- Reducing uncertainty requires prediction — models of what comes next
- Prediction costs energy — there’s always a trade-off
- Selection favors systems that predict well enough to justify the cost
- We observe what persists — and what persists is what predicts
This is the Predictive Entropy Hypothesis:
What survives is what reduces uncertainty about what comes next.
From molecules holding their shape to minds planning tomorrow — it’s the same principle, the same filter, the same requirement.
Prediction isn’t just useful.
It’s the price of existence.
The Predictive Entropy Hypothesis paper
Written by Erik Boman