Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Last Stubsack for 2025 - may 2026 bring better tidings. Credit and/or blame to David Gerard for starting this.)
This is a fun read: https://nesbitt.io/2025/12/27/how-to-ruin-all-of-package-management.html
Starts out strong:
Prediction markets are supposed to be hard to manipulate because manipulation is expensive and the market corrects. This assumes you can’t cheaply manufacture the underlying reality. In package management, you can. The entire npm registry runs on trust and free API calls.
And ends well, too.
The difference is that humans might notice something feels off. A developer might pause at a package with 10,000 stars but three commits and no issues. An AI agent running npm install won’t hesitate. It’s pattern-matching, not evaluating.
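For flavour, here’s a minimal sketch (not from the linked post) of the kind of “something feels off” check a human might run before installing, using npm’s public registry and downloads endpoints. The thresholds and the package name are invented for illustration, and this is exactly the sort of pause an agent blindly running `npm install` skips:

```typescript
// Rough heuristic sniff test for a package's popularity signals.
// Endpoints are npm's public registry/downloads APIs; thresholds are made up.

interface DownloadsPoint {
  downloads: number;
  package: string;
}

async function sniffTest(pkg: string): Promise<void> {
  // Full packument: versions, publish dates, maintainers.
  const meta = await fetch(`https://registry.npmjs.org/${pkg}`).then(r => r.json());
  // Weekly download count from the downloads API.
  const dl: DownloadsPoint = await fetch(
    `https://api.npmjs.org/downloads/point/last-week/${pkg}`
  ).then(r => r.json());

  const versionCount = Object.keys(meta.versions ?? {}).length;
  const createdAt = new Date(meta.time?.created);
  const ageDays = (Date.now() - createdAt.getTime()) / 86_400_000;

  // Big download numbers on a brand-new package with a handful of versions is
  // the "10,000 stars but three commits and no issues" smell the post describes.
  if (dl.downloads > 10_000 && ageDays < 30 && versionCount <= 3) {
    console.warn(`${pkg}: popularity looks manufactured, take a closer look`);
  } else {
    console.log(`${pkg}: nothing obviously off (${dl.downloads} downloads/week, ${versionCount} versions)`);
  }
}

sniffTest("left-pad").catch(console.error);
```

None of this is hard to game either, which is rather the point of the article: every signal here is cheap to manufacture.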
Foz Meadows brings a lengthy and merciless sneer straight from the heart, aptly titled “Against AI”.
Rich Hickey joins the list of people annoyed by the recent Xmas AI mass spam campaign: https://gist.github.com/richhickey/ea94e3741ff0a4e3af55b9fe6287887f
LOL @ promptfondlers in comments
It’s a treasure trove of hilariously bad takes.
There’s nothing intrinsically valuable about art requiring a lot of work to be produced. It’s better that we can do it with a prompt now in 5 seconds
Now I need some eye bleach. I can’t tell anymore if they are trolling or their brains are fully rotten.
Don’t forget the other comment saying that if you hate AI, you’re just “vice-signalling” and “telegraphing your incuruosity (sic) far and wide”. AI is just like computer graphics in the 1960s, apparently. We’re still in early days guys, we’ve only invested trillions of dollars into this and stolen the collective works of everyone on the internet, and we don’t have any better ideas than throwing more ~~money~~ compute at the problem! The scaling is still working guys, look at these benchmarks that we totally didn’t pay for. Look at these models doing mathematical reasoning. Actually don’t look at those, you can’t see them because they’re proprietary and live in Canada. In other news, I drew a chart the other day, and I can confidently predict that my newborn baby is on track to weigh 10 trillion pounds by age 10.
EDIT: Rich Hickey has now disabled comments. Fair enough, arguing with promptfondlers is a waste of time and sanity.
these fucking people: “art is when picture matches words in little card next to picture”
Cory’s talk at 39C3 was fucking glorious: https://media.ccc.de/v/39c3-a-post-american-enshittification-resistant-internet
No notes
lowkey disappointed to see so much slop in other talks (illustrations on slides mostly)
A few weeks ago, David Gerard found this blog post, which quotes a LessWrong post from 2024 where a staffer frets that:
Open Phil generally seems to be avoiding funding anything that might have unacceptable reputational costs for Dustin Moskovitz. Importantly, Open Phil cannot make grants through Good Ventures to projects involved in almost any amount of “rationality community building”
So keep whistleblowing and sneering, it’s working.
Sailor Sega S found a deleted post on https://forum.effectivealtruism.org/users/dustin-moskovitz-1 where Moskovitz says that he has moral concerns with the Effective Altruism / Rationalist movement, not reputation concerns.
All of the bits I quoted in my other comment were captured by archive.org FWIW: a, b, c. They can also all still be found as EA forum comments via websearch, but under [anonymous] instead of a username.
This newer archive also captures two comments written since then. Notably there’s a DOGE mention:
But I can’t e.g. get SBF to not do podcasts nor stop the EA (or two?) that seem to have joined DOGE and started laying waste to USAID. (On Bsky, they blame EAs for the whole endeavor)
The February 2024 Medium post by Moskovitz objects to cognitive decoupling as an excuse to explore eugenics and says that Eliezer Yudkowsky seems unreasonably confident in imminent AI doom. It also notes that Utilitarianism can lead to ugly places such as longtermism and Derek Parfit’s repugnant conclusion. In the comments he mentions no longer being convinced that it’s as useful to spend on insect welfare as on “chicken, cow, or pig welfare.” He quotes Julia Galef several times. A choice quote from his comments on forum.effectivealtruism.org:
If the (Effective Altruism?) brand wasn’t so toxic, maybe you wouldn’t have just one foundation like us to negotiate with, after 20 years?
Does anyone have an explainer on the supposed DOGE/EA connection? All I can find is this dude with a blog wobbling back and forth with LessWrong-flavoured language https://www.statecraft.pub/p/50-thoughts-on-doge (he quotes Venkatesh Rao and Dwarkesh Patel, who are part of the LessWrong Expanded Universe).
The bluesky reference may be about this thread & this thread.
One of the replies names Cole Killian as an EA involved with DOGE. The image is dead but has alt text.
I mean there’s at least one. You could “no-true-scotsman” him, but between completing an EA fellowship and going vegan, he seems to fit a type. [A vertical screenshot of an archive.org snapshot of Cole Killian’s website, stating accomplishments. Included in the list are “completed the McGill effective altruism fellowship” and “went vegan and improved cooking skills”]
(It looks like that archive has since been scrubbed, though Rolling Stone also mentions the connection)
Two of the bsky posts are log-in only. Huh, Killian is into Decentralized Autonomous Organizations (blockchain), high-frequency trading (like our friends at Jane Street), veganism, and Effective Altruism?
Here’s another interesting quote from the now deleted webpage archive: https://old.reddit.com/r/mcgill/comments/1igep4h/comment/masajbg/
My name is Cole. Here’s some quick info. Memetics adjacence:
Previously - utilitarianism, effective altruism, rationalism, closed individualism
Recently - absurdism, pyrrhonian skepticism, meta rationalism, empty individualism





