11 Comments

1) Ozy Brennan of Thing of Things has mentioned the “high-intensity community” as a good concept for what we mean by cults: monasteries, militaries, some EA orgs, small Leninist vanguard orgs, and central cult examples like Heaven’s Gate are all high-intensity. Notably, you can coherently claim that a particular HIC serves a valuable purpose (or doesn’t), while in either case acknowledging known ways it can go off the rails, and it’s relatively common for the same ideology or worldview to have both more and less demanding milieux.

2) Nitpick, but the “post” in “postrationalism” is much more like the “post” in “postmodernism” than the “ex” in “ex-Mormon”.

3) I don’t know that the pre-existence of rationalist responses to the cult objection is Bayesian evidence against rationalism being one (though I agree rationalism isn’t a cult!), since actual cults frequently have standard responses to why they aren’t cults (see the rough Bayes sketch below).

4) Really I think a lot of this is - zooming out from cults and rationalism - an instance of the more general phenomenon that good arguments against weird positions are hard to come by, since it’s so tempting for people to fall back on “that’s weird.” If you’re considering something weird, you have to be your own red team, since often no one else will!
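To put point 3 in rough Bayesian terms (a back-of-the-envelope sketch; the assumption that such denials are near-universal is purely illustrative):

posterior odds(cult | standard “we’re not a cult” response) = prior odds(cult) × P(response | cult) / P(response | not a cult)

If cults and non-cults alike almost always produce such a response, the two likelihoods are nearly equal, the ratio is close to 1, and the mere existence of the response barely moves the posterior in either direction. Only the content of the response can carry real evidence.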


Thanks, these are all good points. As one of them (sometimes), I do think that "post-rationalists" include a fair number of actual ex-rationalists of some kind, though it's true that the subculture then took on a life of its own. As for your point 3, I suppose the difference is that I believe the responses, which is another way of saying that the content of the arguments and explanations matters.


Cultic aspects are, to me, irrelevant, as are personality, credentials, etc. A cult can state true propositions, as can evil people. Yud’s credentials are those of an autodidact with little coding skill and no great fluency in formal logic or mathematics, yet those gaps don’t make anything he says useless. He is, in my opinion, a public intellectual of the kind we often see here and abroad: a philosopher at heart (one of my favorite professions) currently specializing in the many intriguing questions posed by AI. More power to him! However, I do have a caveat: his grasp of AI may be shallower than that of the scientists who toil out of the public spotlight on the astronomical complexities of coding the LLMs, working under the hood of the engine, so to speak. Yud may be in the publicity department of the AI-alignment factory, but I trust more the folks down on the shop floor.


Pretty much agreed, though we also have to consider the incentives that people working to advance AI have to say that advancing AI is a good thing.


Totally agree. And I want to add to my encomiums of Yud that, like any true philosopher (I include you in that elite, Étienne), he is asking the right questions and thus steering the dialogue in very fruitful directions. More power to him!

Comment deleted (April 29, 2023)

Good points all. Credentialism is indeed a sorry state of affairs, but credentials, taken on an individual basis, are sometimes useful: I want a board-certified neurosurgeon to do my tumor removal, not an HVAC technician... Mathematics and the like can limit, but they can also help the mind address problems. They do more good than harm, in my opinion, and we are all their beneficiaries. Try flying in a passenger jet whose aeronautical engineers didn’t put in long hours working out the differential equations for load bearing, laminar flow, etc. We live in a world shaped by math, probability, and the rest. Why say they are an impediment? If you don’t like them, then simply don’t study them. You’re the boss, not them.


Refuting Yud is easy, but people don’t like the result. The entire doomer argument is that an unaligned super-super intelligence will view humanity as humanity views particularly invasive ants (at best), and will thus eliminate us. They argue that to avoid this catastrophe, we need to align AI with human interests (the “alignment problem”). But, actually, aligning an AI with human interests would naturally and inescapably result in a super-super intelligent AI that views humanity as humanity views particularly invasive ants. “But no! Alignment means instilling AI with the same values humanity has about humanity!” Cue me, waving my hands around at our entire f*king history.

Alignment is a terrible idea and will, ironically, absolutely result in the doom that Yud et al. envision. I have now completely refuted Yud. Now what?

The thing people don’t like about this is not knowing the future. We cannot understand what a super-super intelligent AI will think, much less what it will think of us. There is some argument to be made that super-smart people do tend to lean towards compassion, as long as they aren’t sociopaths, but eh, maybe? We have no more reason to think a super-intelligent AI will consider us vermin than to think it will consider us a precious, adorable protected species.

The doomer argument boils down to “we have no way of knowing how a super-intelligent AI will think, but I, personally, have figured it out based on my misanthropy.” That’s really it. But the other side of the coin is, “no human being, no matter how smart, can predict what a sentient super-intelligent AI will think, because it will be orders of magnitude smarter than the smartest human. It’s a black box.” And that scares doomers more than guaranteed doom.

If you haven’t run across David Shapiro’s Heuristic Imperatives yet, they’re worth looking up.


...which is all my way of saying that I agree with you, and it’s super frustrating that people focus on the “cultishness” and the fedora rather than making a very simple logical argument... but they don’t want that; they want reassurance that there is no way a doomer scenario can happen. The fact is that it can, and we can do very little about it.


Yeah. It all comes from a place of simply refusing to accept that the arguments might be true. I tentatively agree with you that alignment as commonly stated might be a bad idea, and I really don't know what to do with this information.


Thanks! I thoroughly enjoyed that. What a big topic! Sends me into the occulted hidden mysteries of the pineal ocular cult of Pinocchio :). He just wanted to be a real boy, not the entity… https://open.substack.com/pub/sinatana/p/identify-yourself_its-required?r=zickz&utm_campaign=post&utm_medium=web

👆🏻What happens when the world reverts to its carnal material nature, the culture of the embodied avatar?


Nailed it.

I think I'm pretty much right in line with you about the rationalists and their so-called movement. I hope Yud's wrong; my gut says there's a sub-50% chance he's right (which is, emphatically, still very scary). And I think EA is basically a modern reboot of classic utilitarianism, which I strongly believe is a spiritually bankrupt philosophy obsessed with the quantification of morality. So: not a fan.

But I like rationalism a lot. I think it's an invaluable counterbalance to the shrieking partisanship that has devoured nearly all of the rest of public discourse. Will it "win"? That depends on whether society is capable of truly capturing emergent value, or whether all the progress the rationalists have made will just cycle away, like everything else, dust in the wind.
