A Fermi Paradox Rambling Grounded in Evolution and Game Theory
I'm here to sell you a boring universe. Not empty --- boring. Full of things that look like us, think like us, worry like us, and then either kill themselves or sit quietly until their sun goes red.
The silence isn't because intelligence is rare. It's because intelligence, at the level we'd notice, is either dead or hiding.
WHAT GOOD'S HALF A WING?
There's this old argument about evolution. "What good is half a wing?" The creationists loved it. If a wing needs to be fully formed to work, they'd say, how could it evolve gradually?
The answer, of course, is that half a wing is useful. It helps you glide. It helps you insulate. It helps you show off to potential mates. Evolution doesn't need the finished product to start benefiting from the parts.
Evolution doesn't build for the future. It builds for right now. A trait sticks around if it doesn't kill you before you fuck, simple as that.
Consider the human spine. Degrades after thirty. For any species, a failing spine is bad. But humans reproduce way before thirty (or used to, at least). The genes that turn your back into a problem by fifty? They sail through the filter. Evolution never sees them.
Same with intelligence.
Crows are smart. Most birds aren't. Does every bird need crow-smarts to survive? No. The crow's intelligence helps it solve problems, but its absence doesn't doom a sparrow. Intelligence, once it appears, persists not because it's essential --- but because it's not harmful enough to breed out.
Human-level intelligence is just this, amplified.
For most of our history, being clever helped us find food, navigate drama, pass on genes. It didn't threaten us because we lacked the tech to make that threat real. The downside? Nukes, climate collapse, AI we can't control. All of it arrives after we've reproduced. Evolution doesn't see it. Can't select against it.
Intelligence is an evolutionary free rider. It hitches a ride because its extinction-level consequences manifest too late to matter at the genetic level.
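If you want to watch the filter ignore late-life costs, here's a deliberately crude toy in Python. Everything about it is an assumption: clonal inheritance, fixed ages, arbitrary numbers, nothing a population geneticist would sign off on. All it demonstrates is the asymmetry: an allele that kills after reproduction just drifts, while one that kills before reproduction gets purged.

```python
import random

def allele_frequency_after(lethal_age, reproduction_age,
                           generations=200, pop_size=1000, start_freq=0.5):
    """Fraction of a toy population still carrying the allele."""
    # True = carries the allele, False = doesn't. Clonal inheritance.
    pop = [random.random() < start_freq for _ in range(pop_size)]
    dies_before_breeding = lethal_age <= reproduction_age
    for _ in range(generations):
        # Selection only sees who is still alive at reproduction_age.
        parents = [carrier for carrier in pop
                   if not (carrier and dies_before_breeding)]
        if not parents:
            return 0.0
        pop = [random.choice(parents) for _ in range(pop_size)]
    return sum(pop) / pop_size

random.seed(0)
# Kills at 50, breeding at 20: selection never touches it, frequency just drifts.
print(allele_frequency_after(lethal_age=50, reproduction_age=20))
# Kills at 15, breeding at 20: purged to 0.0 in one generation.
print(allele_frequency_after(lethal_age=15, reproduction_age=20))
```

The spine, the nukes, the climate: same shape as the first call. The cost lands after the filter has already let the genes through.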
THE USELESS-USEFUL TRANSITION
Here's the thing nobody tells you about intelligence: for most of its existence, it's locally useful and cosmically irrelevant.
It helps you find food. It doesn't help you survive an asteroid.
A species with human-level intelligence in 10,000 BCE was just as vulnerable to a rogue comet as a species of dolphins. All that brainpower, and you're still dead if the sky falls.
The transition to cosmically useful intelligence, the kind that can deflect asteroids, leave the planet, engineer stars, requires crossing a threshold. You need physics. You need industry at scale. You need global coordination.
But here's the trap: you cross that threshold long before you're ready for it.
The same knowledge that builds telescopes builds bombs. The same industry that could one day construct space habitats currently pumps CO₂ into your atmosphere. The same connectivity that might eventually unite your species currently spreads conflict like a virus.
Intelligence, at this stage, is giving a teenager the keys to a nuclear submarine. The capability arrives before the wisdom. And because evolution never selected for wisdom at this scale --- why would it? --- there's no guarantee the wisdom ever arrives.
Do dolphins need anxiety? No. They're fine. They're swimming, eating, fucking. They don't lie awake wondering if the other pod is building a bomb. They're fine.
We're not fine. And we can't go back.
THE GAME WILL FIND A WAY
You want to know why cooperation fails at scale? Game theory. Full stop.
Every actor has an incentive to cooperate for the common good. Every actor has a stronger incentive to defect while others cooperate. If everyone cooperates, everyone wins moderately. If everyone defects, everyone loses. But if you defect while others cooperate? You win big. At least in the short term.
This isn't a failure of morality. It's a failure of incentive structures.
Small bands can enforce cooperation. I know you; you know me; cheating has immediate social consequences. But at the scale of nations, of billions of people, anonymous, disconnected, the enforcement breaks down. There's no global police. No world government. No way to ensure the other player won't cheat first.
The result is a stable trap: a technological species that knows it's heading toward disaster but cannot coordinate to prevent it.
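Here's the band-versus-nation point as a minimal sketch rather than vibes. Everything in it is assumed: standard prisoner's dilemma payoffs (only their ordering matters), tit-for-tat standing in for "I know you, I remember you", and scale modeled as nothing fancier than whether you ever meet the same partner again.

```python
# (my move, their move) -> my payoff. Arbitrary numbers; only the
# ordering temptation > reward > punishment > sucker matters.
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def play(strategy_a, strategy_b, rounds):
    """Total payoff for player A over `rounds` meetings with the SAME partner."""
    history_a, history_b, total = [], [], 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)  # each strategy sees the other's past moves
        move_b = strategy_b(history_a)
        total += PAYOFF[(move_a, move_b)]
        history_a.append(move_a)
        history_b.append(move_b)
    return total

tit_for_tat   = lambda opp: "C" if not opp else opp[-1]  # cooperate, then mirror
always_defect = lambda opp: "D"

# Small band: you WILL meet the same person again, and they remember.
print(play(tit_for_tat, tit_for_tat, 20))    # 60: cooperation compounds
print(play(always_defect, tit_for_tat, 20))  # 24: one cheap win, then punishment
# Anonymous scale: every encounter is effectively one-shot.
print(play(always_defect, tit_for_tat, 1))   # 5 beats...
print(play(tit_for_tat, tit_for_tat, 1))     # ...3. Defection wins the one-shot game.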
Nuclear weapons are the perfect example.
Globally optimal: zero nukes.
Nationally optimal: only I have nukes :)
Second-best: everyone has nukes (the Fallout games; big iron on his hiiiiiiiiip...).
Worst: only my enemy has nukes :(
Terrible: İ̴͈͗s̷͙̝͋ŗ̴̞̿à̴͔͙̍e̴͈͎̅͗l̵̞̑ ̵̰͇̉̎h̸̞͋͊á̸̹s̴̬͙̀ ̴̛̝͝ţ̸̛ḥ̸̲̏͝e̷̻̱͝m̷̭͉̐.̴̼͛͜
Given these payoffs, rational actors pursue the nationally optimal outcome. Every time. Cooperation fails not because anyone is irrational, but because the math makes defection the locally rational choice.
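The same claim, reduced to a dominance check. The move labels and payoff numbers below are illustrative stand-ins that mirror the ranking above, not measurements of anything.

```python
MOVES = ("disarm", "arm")
# PAYOFF[my_move][their_move] = my outcome, higher is better.
PAYOFF = {
    "disarm": {"disarm": 3,   # globally optimal: zero nukes
               "arm":    0},  # worst: only my enemy has them
    "arm":    {"disarm": 5,   # nationally optimal: only I have them
               "arm":    1},  # second-best: everyone has them
}

def best_reply(their_move):
    return max(MOVES, key=lambda my_move: PAYOFF[my_move][their_move])

for their_move in MOVES:
    print(f"if they {their_move}, I should {best_reply(their_move)}")
# -> "arm" is the best reply to BOTH moves: it strictly dominates.
# Two rational players land on (arm, arm) even though (disarm, disarm)
# pays more for both. That's the trap.
```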
You think aliens are above this? They do physics. They'll find fission. They'll face the same dilemma. The math doesn't care about biology.
The first man to fence common land wasn't evil. He was just playing the game as structured. Once enclosure is possible, the rational move is to enclose before others do. Same with weapons. Same with power. Same with information.
The game will find a way. It always does.
THREE WAYS THIS ENDS
Given the trap, what happens to a typical technological species?
1. Rapid annihilation.
Some species don't survive. They trigger their own extinction within centuries of reaching the threshold. In geological time, they appear and vanish in an instant. They leave no traces we could detect from afar, just a brief pulse of industrial chemicals in an atmosphere, maybe, before silence.
2. Long-term struggle.
Some avoid annihilation but never solve coordination. They muddle through, century after century, always on the edge. They might develop partial solutions (arms control, international institutions, cultural norms) but never full cooperation. They remain planet-bound, resource-constrained, largely invisible. They aren't building Dyson spheres or broadcasting their existence. They're just... surviving. The galaxy could be full of such worlds and we'd never know.
3. Rare transcendence.
A tiny fraction solve the trap. They develop global cooperation, escape the prisoner's dilemma, expand into their solar system and beyond. These are the civilizations that might build megastructures, launch generation ships, transform into post-biological entities. They are also, by definition, rare; otherwise the galaxy would be noisy with their activity.
Note what this means: the civilizations we might hope to detect, the Kardashev Type II and III ones, are the exceptions, not the rule. Most intelligence never gets there. Most either dies young or lives quietly, never leaving a detectable mark.
Bonus point.
T̸h̶e̷ ̸v̸̞̤̿̈ö̸̡̰́͂ị̵͔̐d̵̰̠͓̘̦͒c̵̖̪̚͝ơ̸̳̬͂n̵͈̕͠ś̷̹u̵̲̓m̷̹͍͝ẹ̸̕͝s̴͓͐̽ ̷̖̽ù̵̺̚s̵͎̐̓ ̷͍̜̀͌a̶͙̐͂l̸̼͓̈́l̶̪̍͑.̷̨̞̐͝
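To see why that distribution sounds like silence, here's the arithmetic skeleton. The next section is explicit that none of this is quantified, so every figure below is a loudly fake placeholder; only the multiplication is the point.

```python
# Purely illustrative numbers. Not estimates, not claims: placeholders
# showing that detectability multiplies through the filter, so even a
# crowded galaxy can look empty.

civilizations  = 1_000_000                            # technological species, made up
p_annihilation = 0.30                                 # die young: brief or no signal
p_struggle     = 0.699                                # survive quietly: no signal
p_transcend    = 1 - p_annihilation - p_struggle      # 0.001: the loud ones

loud = civilizations * p_transcend
print(f"{loud:.0f} potentially detectable out of {civilizations:,}")
# -> 1000 out of 1,000,000. And if the loud ones also learn to shut up
# (the dark-forest point below), the observable count rounds to zero.
```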
WHAT THIS MODEL (?) DOESN'T CLAIM
Let's be clear about boundaries.
I'm not claiming artificial intelligences don't exist. They might. If they do, they're black boxes with inscrutable goals, possibly out there and silent for reasons we can't fathom. But we don't need them to explain Fermi, so invoking them is unnecessary.
I'm not claiming transcendence is impossible. Just rare. The species that solve the trap are the exceptions, and their rarity explains why we don't see obvious signs.
I'm not claiming intelligence is useless. It's locally useful, which is why it persists. It's just not protective at the species level until very late.
And I'm not quantifying anything. I don't know what percentage survive, what fraction become strugglers versus annihilators, how many transcend. The model identifies a filter. It doesn't measure its mesh size.
THE CROWS IN THE DARK
So where is everybody? --- asked everyone, everywhere, ever since.
T̶h̸e̶y̵ ̷a̷r̴e̵ ̴i̷n̵ ̶y̵o̵u̵r̴ ̸w̵a̷l̸l̷s̶. Sitting on their worlds, moderately intelligent, moderately anxious, moderately doomed. Some will kill themselves. Most will struggle indefinitely. A tiny few will figure it out and leave --- but they'll leave quietly, because the universe punishes loudmouths.
The silence from the stars doesn't mean intelligence is rare. It means intelligence, at the level we might hope to detect, is either short-lived or quiet. The universe could be teeming with species much like us, clever enough to worry, not clever enough to escape. Each listening to the silence and drawing the wrong conclusion.
We're not alone. We're just not loud. And neither, probably, are they.
The crows are out there, sitting in their own dark forests, cawing into the void and hearing nothing back. Not because no one's there. Because everyone's too busy surviving to shout.
Bye!
- M.O. Valent, 07/03/2026