

It’s in the quote tweets.


And apparently, one of their FOUNDING BELIEFS, is that I had sex with somebody underage (mutually desired sex, according to the Zizians)… and then MIRI, a nonprofit I started, paid money (to a third-party extorter) to hush that up… which payment, according to the Zizians, is in violation of DECISION THEORY… and, therefore, for THAT EXACT REASON (like specifically the decision theory part), everything believed by those normie rationalists who once befriended them is IRRETRIEVABLY TAINTED… and therefore, the whole world is a lie and dishonest… and from this and OTHER PREMISES they recruit people to join their cult.
Yudkowsky is the first person I have ever seen describe this as a load-bearing belief of the Zizians. Offhand, I don’t recall the news stories about the murders even mentioning it.


Or maybe society would run a prediction market about whether ten years later the 24-year-old would think that it was a terrible terrible idea for them to have microdosed LSD as a kid. If society’s rules were that sensible
Wha’ the fuuuuuck


The problem with writing a Harry Potter fanfic as your cult recruitment tool is that you end up having written a Harry Potter fanfic as your cult recruitment tool.


Also appearing is friend of the pod and OpenAI board member Larry Summers!
The emails have Summers reporting to Epstein about his attempts to date a Harvard economics student & to hit on her during a seminar she was giving.
https://bsky.app/profile/econmarshall.bsky.social/post/3m5p6dgmagb2a
To quote myself: Larry Summers was one of the few people I’ve ever met where a casual conversation made me want to take a shower immediately afterward. I crashed a Harvard social event when a friend was an undergrad there and I was a student at MIT, in order to get the free food, and he was there to do glad-handing in his role as university president. I had a sharp discomfort response at the lizard-brain level — a deep part of me going on the alert, signaling “this man is not to be trusted” in the way one might sense that there is rotten meat nearby.


I still say that the term “scientific racism” gives these fuckos too much credit. I’ve been saying “numberwang racism” instead.


The best executives are very strong generalists, and the best managers are chosen to be strong performers in the task that they are managing other people to do. Elon is widely known to be a strong engineer, as well as a strong designer, and spends much of his time arguing details of that kind of work with his reports. Mark Zuckerberg is known to do similarly.
LO fuckin’ L


I’d go for Motoko Kusanagi’s prosthetic body, myself, as long as I could afford the upkeep. That whole “don’t darken your Soul Gem” thing would go terribly for me.


Whatever marginal utility genAI has in mathematics, like being a shitty version of a semantic search engine, is outweighed by the damage it is doing to critical thought at large. “Ooh, the bus to the math conference runs so smoothly on this leaded gasoline!”


Reasoning With Machines doesn’t work on reasoning, really. It’s almost entirely large language models — chatbots. Because that’s where the money — sorry, the industry interest — is. But this paper got into the NeurIPS 2025 conference.
A reminder that the NeurIPS FAQ for reviewers says that “interactions with LLMs” are an acceptable way “to enhance your understanding of certain concepts”. What other big conferences are there… AAAI 2026, you say?
AAAI-26 will follow a two-phase reviewing process as in previous years, with two additions: an additional AI-generated review in Phase 1, and an AI-generated summary of the discussions at the end of the discussion phase. The AI-generated content is being used as part of a pilot program to evaluate the ability of AI tools to assist in the peer review process.
I’m gonna say it: The entire “artificial intelligence”/“machine learning” research field is corrupt. They have institutionally accepted the bullshit fountain as a tool. It doesn’t matter if they’re only using chatbots as a “pilot program”; they’ve bought into the ideology. They’ve granted fashtech a seat at the bar and forced all the other customers to shake its hand.


Was there ever, like, a push by Falun Gong to whitewash their articles? I seem to recall gossip from somebody (maybe in a skeptics’ group) about that, but I have no idea where in Wikipedia’s deep drama holes to look for evidence of it.


How do you write like this?
The first step is not to have an editor. The second step is to marinate for nearly two decades in a cult growth medium that venerates you for not having an editor.


Hasn’t Falun Gong had beef with Wikipedia for a long time? I have a vague recollection of reading about that, but I do not know where.


The only nice feeling here is that of every joke we science students made about the management school being validated.


Goertzel is a fan of Chris Langan.


muted colors
A lot of it looks like it was pissed on.
There is a mention of something that might be what Yudkowsky is on about in this Wired story:
This article doesn’t make it sound so much like a “FOUNDING BELIEF”; lots of weird shit like the brain hemispheres business appears to have come first. But the much more interesting thing is at the end of the story:
(Archive link to Ziz’s blog)
Hmm. Hm-hmmm.