Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Last Stubsack for 2025 - may 2026 bring better tidings. Credit and/or blame to David Gerard for starting this.)


A few weeks ago, David Gerard found this blog post discussing a LessWrong post from 2024 where a staffer frets that:
So keep whistleblowing and sneering, it’s working.
Sailor Sega S found a deleted post on https://forum.effectivealtruism.org/users/dustin-moskovitz-1 where Moskovitz says that he has moral concerns with the Effective Altruism / Rationalist movement, not reputation concerns.
All of the bits I quoted in my other comment were captured by archive.org FWIW: a, b, c. They can also all still be found as EA forum comments via web search, but under [anonymous] instead of a username.
This newer archive also captures two comments written since then. Notably there’s a DOGE mention:
The February 2024 Medium post by Moskovitz objects to cognitive decoupling as an excuse to explore eugenics and says that Eliezer Yudkowsky seems unreasonably confident in imminent AI doom. It also notes that utilitarianism can lead to ugly places such as longtermism and Derek Parfit’s repugnant conclusion. In the comments he mentions no longer being convinced that it’s as useful to spend on insect welfare as on “chicken, cow, or pig welfare.” He quotes Julia Galef several times. A choice quote from his comments on forum.effectivealtruism.org:
Does anyone have an explainer on the supposed DOGE/EA connection? All I can find is this dude with a blog wobbling back and forth with LessWrong-flavoured language https://www.statecraft.pub/p/50-thoughts-on-doge (he quotes Venkatesh Rao and Dwarkesh Patel, who are part of the LessWrong Expanded Universe).
The bluesky reference may be about this thread & this thread.
One of the replies names Cole Killian as an EA involved with DOGE. The image is dead but has alt text.
(It looks like that archive has since been scrubbed, though Rolling Stone also mentions the connection)
Two of the bsky posts are log-in only. Huh, Killian is into Decentralized Autonomous Organizations (blockchain), high-frequency trading (like our friends at Jane Street), veganism, and Effective Altruism?
Here’s another interesting quote from the archive of the now-deleted webpage: https://old.reddit.com/r/mcgill/comments/1igep4h/comment/masajbg/