These days, some of the tech-savvy kids worry about extinction from rogue AI, and the rest worry about extinction from climate change. Before, we worried about nuclear winter or overpopulation. There was peak oil. There was and still is the threat of deadly pandemics. Self-replicating nanobots. The inversion of the magnetic poles, the explosion of a supervolcano, a disruption of the sea currents leading to a global freeze. And most religions, cyclical or not, have their eschatological version of the End Times: the Last Judgment, Ragnarök, the Rapture, the dissolution of the universe in Hinduism, the ultimate destruction of the world by the Seven Suns in Buddhism. Some guy named William Miller once convinced millions of people that the Second Coming of Christ was scheduled for 1844, and when that didn’t happen, they called it the Great Disappointment.
We sure seem to think that The End Is Near quite often. We think it at a much higher rate than End Times actually happen.
And to be sure, End Times do occasionally happen. Some societies have been destroyed, or at least greatly damaged, by natural disasters (like possibly the Minoan civilization) or ecological collapse (e.g. Easter Island, maybe the Maya, maybe Norse Greenland). Past versions of the biosphere, like the dinosaur-dominated Earth of the Cretaceous, or the anaerobic microbes before the Great Oxidation Event, have been catastrophically wiped out and replaced by new biospheres that grew out of the few survivors. And there have been plenty of non-catastrophic, but still bad, disasters, like WWII and the covid pandemic. So it’s not an unreasonable prediction to say that something could happen. We can’t just dismiss doomerism out of hand.
But it is useful to remember that predictions of the end of the world virtually never come true, at least not on the timescale of human lives. There seems to be something in our psychology that makes End Times seem waaaay more likely than they actually are. What might that be?
I propose that the answer is a cognitive bias in favor of simplistic scenarios.
We talked about something similar in my post on worldbuilding, and then my post on dystopia. When writers imagine fictional stories, they necessarily set them either in our world, or in something close to it that borrows its intricacy, or in fictional worlds that end up being very simple once you dig past the illusion of complexity. They do this not because they choose to, but because it is virtually impossible for a single human brain to come up with a world whose complexity gets even close to a meaningful fraction of ours.
Predictions about the future are also a type of worldbuilding. They are about imagining the state of our world in X years. But of course they have an important additional constraint that fiction doesn’t have: a good prediction should, given the present, be plausible.¹
How can we make a plausible prediction? A good bet would be that the world X years from now will be just as complex as ours, or more. After all, our world is at least as complex as it was X years ago. There are more people, more ideas, more artifacts.² However, good luck making a prediction that plausibly takes this into account!
Imagine someone from X years ago trying to predict 2024; it’s possible they’d get some things broadly right, but they’d also get a lot wrong, because the world is dynamic and chaotic and moves through the combined action of billions of people (as well as other living beings and natural phenomena). For example, one thing the people from X years ago basically always got wrong is the aesthetics; so it’s a good bet that our predictions for the dominant aesthetics of the world X years from now are also wrong.
So, given a sufficiently large X, and a sufficiently fine resolution, most predictions about the world X years from now aren’t particularly plausible. Most of us know this perfectly well, so we refrain from making serious, confident predictions about specific details like the fate of this or that country, or the emergence of a particular technology, or fashion. We’ll make rough extrapolations, and we’ll talk in general, probabilistic terms, but for anything more precise, we’ll abstain. We know the world is too complex.
But this is an uncomfortable position to be in. We actually want to know the future! It’s stressful to have no idea what we’re headed towards.
So instead we make simplistic scenarios. Predictions that can fully fit into our brains. However, most such scenarios are not plausible — for example, a scenario of perfect stagnation, where nothing substantial changes anymore — so we reject almost all of them outright. All that’s left, then, is predictions of doom.
Doom is easy to conceptualize. Just imagine the entire Earth as a barren field of rock. Molten rock, if you want. Imagine everything dead, nothing moving except through mechanistic geological and atmospheric processes. No fashion, commerce, infrastructure, governance, or culture. Tranquility, forever.
Or you could just imagine an absence of humans. That’s also easy. There were no humans for most of the Earth’s history, and there are plenty of places today where no humans live. Vast deserts, continents of ice, the tops of high mountains, the bottom of the oceans. A world with life is more complex than a barren planet, but it’s far less complex than a world with humans in it.
These future scenarios are not only simple to imagine, but they’re also plausible. We could go extinct. The Earth could become a barren rock, just like the Moon, or Mars, or any of the other solid objects of our solar system. There are plenty of real examples, and plenty of realistic scenarios that could bring us there, whether that’s technological disasters, ecological crises, or religious prophecies (if you’re so inclined).³
Of course, that’s not to say that any given prediction of doom is likely. In fact, many of them are totally impossible (all the religious prophecies!) and the rest are rather improbable when taken individually, though of course there’s healthy debate on the odds of things like rogue AI and climate change. But just by virtue of being both plausible and simple, which almost nothing else manages to be, doom scenarios are the ones we’re biased to think about the most.
One wonders whether all that thinking actually changes the odds. I can imagine it helping us design ways to avoid doom or prepare for it. But also, sometimes when you think too much about something you don’t want, you end up getting it. Maybe that’s why I’ve grown sympathetic to the anti-doomer stance over the years.
Thanks to Matt Popovich for the tweet that prompted this essay.
1. Technically the desired quality is that the prediction be correct, but we can’t know this at the moment of predicting. There can also be value in making predictions without worrying about truth. So plausibility is a good proxy.
2. There’s no reason to think this holds for any given pair of points in time. The world can become less complex over a given period. But it seems very likely that we’re currently at the pinnacle of complexity (so far), if only because of our large population.
3. Religious prophecies aren’t always predicting doom, but the “good” End Times scenarios are also biased towards simplicity. Often they just assume that God will take care of everything at once, like in a judgment day scenario—a binary decision for each soul, and then an eternity of unspecified, presumably simple, bliss or punishment.