11 Comments

1) Ozy Brennan of Thing of Things has suggested the “high-intensity community” as a good concept for what we mean by cults: monasteries, militaries, some EA orgs, small Leninist vanguard orgs, and central cult examples like Heaven’s Gate are all high-intensity. Notably, you can coherently claim that a particular HIC serves a valuable purpose (or that it doesn’t), while acknowledging that in either case there are known ways it can go off the rails, and it’s relatively common for the same ideology or worldview to have both more and less demanding milieux.

2) Nitpick, but the post in postrationalism is much more like the post in postmodernism than the ex in ex-Mormon.

3) I don’t know that the pre-existence of rationalist responses to the cult objection is Bayesian evidence against rationalism being one (though I agree rationalism isn’t a cult!), since actual cults frequently have standard responses to why they aren’t cults.

4) Really I think a lot of this is - zooming out from cults and rationalism - an instance of the more general phenomenon that good arguments against weird positions are hard to come by, since it’s so tempting for people to fall back on “that’s weird.” If you’re considering something weird you have to be your own red team since often no one else will!

Apr 27, 2023 · Liked by Étienne Fortier-Dubois

Cultic aspects are, to me, irrelevant, as are personality, credentials, etc. A cult can state true propositions, as can evil people. Yud’s credentials are those of an autodidact with little coding skill and no sophisticated fluency in logic or mathematics, yet those gaps don’t make anything he says useless. He is, in my opinion, a public intellectual of the kind we often see here and abroad: a philosopher at heart (one of my favorite professions), currently specializing in the many intriguing questions posed by AI. More power to him! However, I do have a caveat: his grasp of AI may be shallower than that of the scientists who toil out of the public spotlight on the astronomical complexities of coding the LLMs, working under the hood of the engine, so to speak. Yud may be in the publicity department of the AI alignment factory, but I trust the folks down on the shop floor more.


Refuting Yud is easy, but people don't like the result. The entire doomer argument is that an unaligned super-super intelligence will view humanity as humanity views particularly invasive ants (at best), and thus will eliminate us. They argue that to avoid this catastrophe, we need to align AI with human interests (the "alignment problem"). But, actually, aligning an AI with human interests would naturally and inescapably result in a super-super intelligent AI that views humanity as humanity views particularly invasive ants. "But no! Alignment means instilling the AI with the same values humanity has about humanity!" Cue me, waving my hands at our entire f*king history.

Alignment is a terrible idea and will absolutely result in the doom that Yud et al envision, ironically. I have now completely refuted Yud. Now what?

The thing people don't like about this is not knowing the future. We cannot understand what a super-super intelligent AI will think, much less think of us. There is some argument to be made that super-smart people do tend to lean towards compassion, as long as they aren't sociopaths, but eh, maybe? We have no more way of knowing if a super-intelligent AI will consider us vermin than we do that it will consider us a precious, adorable protected species.

The doomer argument boils down to "we have no way of knowing how a super-intelligent AI will think, but I, personally, have figured it out based on my misanthropy." That's really it. The other side of the coin is, "no human being, no matter how smart, can predict what a sentient super-intelligent AI will think, because it will be orders of magnitude smarter than the smartest human. It's a black box." And that scares doomers more than guaranteed doom.

If you haven't run across David Shapiro's Heuristic Imperatives yet, they're worth looking up.


Thanks! I thoroughly enjoyed that. What a big topic! It sends me into the occulted hidden mysteries of the pineal ocular cult of Pinocchio :). He just wanted to be a real boy, not the entity… https://open.substack.com/pub/sinatana/p/identify-yourself_its-required?r=zickz&utm_campaign=post&utm_medium=web

👆🏻What happens when the world reverts to its carnal material nature, the culture of the embodied avatar?


Nailed it.

I think I'm pretty much right in line with you about the rationalists and their so-called movement. I hope Yud's wrong; my gut says there's a sub-50% chance he's right (which is, emphatically, still very scary); and I think EA is basically a modern reboot of classic utilitarianism, which I strongly believe is a spiritually bankrupt philosophy obsessed with the quantification of morality. So: not a fan.

But I like rationalism a lot. I think it's an invaluable counterbalance to the shrieking partisanship that has devoured nearly all of the rest of public discourse. Will it "win"? That depends on whether society is capable of truly capturing emergent value, or whether all the progress the rationalists have made will just cycle away like everything else, dust in the wind.
