Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
TIL some rats have started a literal monastery to try to defeat the robot god with good ole religion (well, Zen buddhism)
here’s a mildly critical view that apparently still believes the approach has legs
https://www.lesswrong.com/posts/ENCNHyNEgvz9oo9rr/briefly-on-maple-and-the-broader-community
I note in passing that there seems to be a mild upsurge in religious-friendly posts on LW lately.
That opening has a strong ‘omg what the fuck happened there, and why are you still friends with these people if it is that bad’ vibes.
Lots of ex Maplers I’ve talked to are variously angry, some (newly) traumatized, confused, etc., but the vibe has generally been “gosh it’s fucking complicated”
Lot of people told me they were mad, but I got the vibe it was complicated. lol what…
The product of monasteries is saints, at least in small quantities.
Wrong, it is beer.
I haven’t seen anti-safetyist arguments that actually address the technical claims made by Eliezer etc.
I agree with him there. But it’s hard to make arguments against something which doesn’t exist. ;)
it will take some very unusual kind of virtue and skill
Love how we went from ‘you need to learn rationality, and be aware of your biases’ to ‘you need to have virtue’. And ignore the screaming in the background, that is just the academics who studied ethics doing their normal thing again.
Anyway the rest gets pretty dark pretty quickly, and I just see red flags (for people who believe in data this devolves very quickly into just going ‘people are prob better off due to this, the new trauma doesn’t count because of preexisting conditions, consent and trauma always happening’). And this was the article they wrote not wanting to harm the project.
Wait one more remark:
Furthermore I think that probably with the exception of the one actual AI researcher there, people at Maple basically don’t understand what AI is
Hahahaha, perfect.
It also reminds me of the interview with Metz where they got mad Metz used religious terms.
Announcing my sneerclub follow up to MAPLE: “Man, All These Losers Are Bonkers” aka MATLAB
Deep cut, I love it!
Michael Hiltzik in LATimes: “Say farewell to the AI bubble, and get ready for the crash”
Fun quote:
The rest of [AI 2027], mapping a course to late 2027 when an AI agent “finally understands its own cognition,” is so loopily over the top that I wondered whether it wasn’t meant as a parody of excessive AI hype. I asked its creators if that was so, but haven’t received a reply.
And because it’s the LA Times, there’s a chatbot slop section at the bottom to provide false balance.
I’m enjoying the mood today. We’re all looking for what the next Big Dumb Thing will be that we’ll be dunking on next year, like we’re browsing the dessert menu at a fancy restaurant.
On top of that, there’s clear signs that we’ve grown quite an audience from dunking on AI. Ed Zitron reached 70k subscribers just a couple weeks ago, and Pivot to AI is at nearly 9k on YouTube.
If and when the next Big Dumb Thing comes along, chances are we’re gonna have a headstart against the hucksters.
In other news, bodhidave reported a case of Google AI and ChatGPT making identical citation fuck-ups:
The true meaning of “pooping back and forth forever”
sick reference. I don’t even know how I knew this.
Even if they aren’t actively relying on each other here, I would assume that we’re reaching a stage where all of the competing LLMs are using basically the entire Internet as their training data, and while there is going to be some difference based on the reinforcement learning process, there’s still going to be a lot of convergence there.
Plus, there’s the hefty amount of AI slop that’s been shat onto the Internet over the years, plus active attempts to sabotage LLM datasets through tarpits like Iocaine and Nepenthes, and media-poisoning tools like Glaze and Nightshade.
So, if and when model collapse fully sets in, it’s gonna hit all of them at once. Given that freshly trained LLMs are gonna be effectively stillborn, if ChatGPT et al. collapse, it’ll likely kill LLMs as a tech for at least the next ten years.
Ed Zitron’s planning to hold AI boosters to account:
Well if the bubble pops he will have to pivot to people who pivot. (That is what is going to suck when the bubble pops: so many people are going to lose their jobs, and I fear a lot of the people holding the bags are not the ones who really should be punished the most (really hope not a lot of pension funds bought in). The stock market was a mistake).
I imagine it’ll be a pretty lucrative pivot - the public’s ravenous to see AI bros and hypesters get humiliated, and Zitron can provide that in spades.
Plus, he’ll have a major headstart on whatever bubble the hucksters attempt to inflate next.
the hucksters attempt to inflate next.
Quantum, it has already started: https://www.schneier.com/blog/archives/2025/07/cheating-on-quantum-computing-benchmarks.html
Y’know, I was predicting at least a few years without a tech bubble, but I guess I was dead wrong on that. Part of me suspects the hucksters are gonna fail to inflate a quantum bubble this time around, though.
Quantum computing is still too far out from having even a niche industrial application, let alone something you can sell to middle managers the world over. Anybody who day-traded could get into Bitcoin; millions of people can type questions at a chatbot. Hucksters can and will reinvent themselves as quantum-computing consultants on LinkedIn, but is the raw material for the grift really there? I’m doubtful.
Hucksters can and will reinvent themselves as quantum-computing consultants on LinkedIn, but is the raw material for the grift really there? I’m doubtful.
By my guess, no. AI earned its investor/VC dollars by providing bosses and CEOs alike a cudgel to use against labour, either by deskilling workers, degrading their work conditions, or killing their jobs outright.
Quantum doesn’t really have that - the only Big Claim™ I know it has going for it is its supposed ability to break pre-existing encryption clean in half, but that’s near-certainly gonna be useless for hypebuilding.
New Atlantic article regarding AI, titled “AI Is a Mass-Delusion Event”. It’s primarily about the author’s feelings of confusion and anxiety about the general clusterfuck that is the bubble.
better, or equivalent to, a mass defecation event?
Found a solution to the Fermi paradox, and solved the problem of all the ‘dark matter’: any advanced society just puts a Dyson sphere around their galaxy, that is why we can’t see or hear from them.
(Yes, this is a subsneer for the silly Altman remark. The whole solar system, not just the Sun (I do support walling off The Sun)).
DAE remember when neel armstrong invented nasa and said his famous quote “That’s one small step for man, ok let’s fill this motherfucker up with GPUs”
*slaps moon* You can make so much computronium from these atoms.
E: shit no that is wrong, I used the term computronium correctly, if it was Altman he would have called it unobtanium
Skip the unobtainium I say, let’s just implement the human instrumentality project and tangify us all into a collective goo.
Because of course why have a data ~~center~~ when you can have an ecumenskatasphaira.
The usual suspects are mad about college hill’s expose of the yud/kelsey piper eugenics sex rp. Or something, I’m in bed and can’t be bothered to link at the moment.
I’m sorry, we finally, officially need to cancel fantasy TTRPGs. If it’s not the implicit racialization of everything, it’s the use of the stat systems as a framework for literally masturbatory eugenics fetishization.
You all can keep a stripped-down version of Starfinder as a treat. But if I see any more of this, we’re going all the way back to Star Wars d6 and that’s final.
also: The int-maxxing and overinflated ego of it all reminds me of the red mage from 8-bit theater, a webcomic based on final fantasy about the LW (light warriors) that ran from 2001-2010
E: thinking back on it, reading this webcomic and seeing this character probably in some part inoculated me against people like yud without me knowing
I never read 8bit. I read A Modest Destiny. Wonder how that guy is doing; he always was a bit weird and combative, but when he deleted his blog it was showing very early signs of right wing culture warrior bits (which was ironic considering he burned a US flag).
Never read AMD (and shan’t). The author’s site appears to be live.
8BF’s site has been taken over by bots, and I can’t be bothered to find an alternate source. Dead internet go brrrrr. Otherwise, the creator, Brian Clevinger, appears to have had a long career in comics, and has written many things for Marvel.
Yeah, but he used to have forums, and then a blog, and then no blog and then a blog again, and then a hidden blog etc. Think Howard has only a few minor credits on some games, he always came off as a bit of a weirdly combative nerd who thought he was right and the smartest in the room and didn’t get that people didn’t agree with his definitions/assumptions. He is a big idea guy for example. One of his comics was also called ‘the atheist, the agnostic and the asshole’ so yeah. The 00’s online comic world was something.
has only a few minor credits[…], he always came off as a bit of a weirdly combative nerd who thought he was right and the smartest in the room and didn’t get that people didn’t agree with his definitions/assumptions. He is a big idea guy for example.
gosh i’m sure glad that these kinds of people disappeared from the internet /s
8BF’s site has been taken over by bots, and I can’t be bothered to find an alternate source.
You can find it directly on Brian Clevinger’s blog, Nuklear Power. Here’s a direct link to the archive.
Ah thanks! On mobile the main page gets redirected to spam, but the site is navigable from the archive.
To be fair to DnD, it is actually more sophisticated than the IQ fetishists: it has 3 stats for mental traits instead of 1!
I would simply learn how to keep “games” and “reality” separate. I actually already know. It helps a lot.
Racists are gonna racist no matter what. They didn’t need TTRPGs around to give them the idea of breaking out the calipers.
Yes, but basic dnd does have a lot of racism built in, esp with Gygax not being great on that end (“nits make lice”, he said, about how it was lawful for paladins to kill orc babies). They did drop the sexism pretty quickly, but no big surprise his daughters were not into it. It certainly helps with the whole hierarchical mindset, the ‘my int/level is higher than yours so im better than you’ stuff. And sadly a lot of people do have trouble keeping both separate (and even that isn’t always ideal, esp in larps).
But yes, this. Considering the context it’s def a bit of a case of some of their ideologies, or ideological fantasies, bleeding through. (Esp considering Yud has been corrected on his faulty understanding of genetics before).
Anyone found with a non-cube platonic solid will be lockerized indefinitely
Is the scoop that besides being an EA mouthpiece KP is also into the weird stuff?
Weird rp wouldn’t be sneer worthy on its own (although it would still be at least a little cringe), it’s contributing factors like…
- the constant IQ fetishism (Int is superior to Charisma but tied with Wis, and obviously a true IQ score would be both Int and Wis)
- the fact that Eliezer cites it like serious academic writing (he’s literally mentioned it to Yann LeCun in twitter arguments)
- the fact that in-character lectures are the only place Eliezer has written up many of the decision theory takes he developed after the sequences (afaik, maybe he has some obscure content that never made it to lesswrong)
- the fact that Eliezer thinks it’s another HPMOR-level masterpiece (despite how wordy it is, HPMOR is much more readable; even authors and fans of glowfic usually acknowledge the format can be awkward to read, and most glowfics require huge amounts of context to follow)
- the fact that the story doubles down on the HPMOR flaw of confusion about which characters are supposed to be author mouthpieces (putting your polemics into the mouths of characters working for literal Hell… is certainly an authorial choice)
- and the continued worldbuilding development of dath ilan, the rationalist utopia built on eugenics and censorship of all history (even the Hell state was impressed!)
…At least lintamande has the commonsense understanding of why you avoid actively linking your bdsm dnd roleplay to your irl name and work.
And it shouldn’t be news to people that KP supports eugenics, given her defense of Scott Alexander or her comments about super babies, but possibly it is, and the headliner of weird roleplay will draw attention to it.
That’s about what I was thinking, I’m completely ok with the weird rpg aspect.
Regarding the second and third point though I’ll admit I thought the whole thing was just yud indulging, I missed that it’s also explicitly meant as rationalist esoterica.
also explicitly meant as rationalist esoterica.
Always a bad sign when people can’t just let a thing be a thing just for enjoyment, but see everything as the ‘hustle’ (for lack of a better word). I’m reminded of that dating profile we looked at which showed that 99% of what he did was related to AI and AI doomerism, even the parties.
obligatory reminder that “dath ilan” is just “thailand” anagrammed and I still don’t know why. Working theory is Yud wants to recolonise thailand
We’ve definitely sneered at this before, i do not recall if it was known that KP was the cowriter in this weird forum RP fic
E: googling “lintamande kelsey piper” and looking at a reddit post digs up the AO3 account, inactive since 2018. A total just shy of 130k words: a little Marvel stuff, most of it LOTR-based, and some of it tagged “Vladmir Putin/Sauron”. How fun!
No judgement from me, tbh. Fanfic be fanficking. I aint gonna read that shit tho.
For all of the 2.2 seconds I have spent wondering who Yud’s coauthor on that was, I vaguely thought that it was Aella. I don’t know where I might have gotten that impression from. A student paper about fanfiction identified “lintamande” as Kelsey Piper in 2013.
I tried reading the forum roleplay thing when it came up here, and I caromed off within a page. I made it through this:
The soap-bubble forcefield thing looks deliberate.
And I got to about here:
Mad Investor Chaos heads off, at a brisk heat-generating stride, in the direction of the smoke. It preserves optionality between targeting the possible building and targeting the force-bubble nearby.
… before the “what the fuck is this fucking shit?” intensified beyond my ability to care.
Yeah I couldn’t find the strength to even get to the naughty stuff, I gave up after one or two chapters. And I’ve read through all of HPMOR. 😐
I’m hard-pressed to think of anything else I have tried to read that was comparably impenetrable. At least when we played “exquisite corpse” parlor games on the high-school literary magazine staff, we didn’t pretend that anything we improvised had lasting value.
Previous thread
E: we didn’t fucking know
Not sure if anybody noticed the last time, but so they get isekai’d into a DnD world, which famously runs on some weird form of fantasy feudalism, and they expect a random high-int person to rule the country somehow? What in the primogenitor is this stuff? You can’t just think yourself into being a king; that is one of the issues with monarchies.
E: ah no they are in a totalitarian state ruled by the literal forces of hell, places that totally praise merit based upwards mobility.
ah no they are in a totalitarian state ruled by the literal forces of hell, places that totally praise merit based upwards mobility.
Hey, write what you know
An encounter of this sort is what drove Lord Vetinari to make a scorpion pit for mimes, probably.
Okay so I know GPT-5 had a bad launch and has been getting raked over the coals, but AGI is totally still on, guys!
Why? Because trust me it’s definitely getting better behind the scenes in ways that we can’t see. Also China is still scary and we need to make sure we make the AI God that will kill us all before China does because reasons.
Also, despite talking about how much of the lack of progress is due to the consumer model being a cost-saving measure, there’s no reference to the work of folks like Ed Zitron on how unprofitable these models are, much less the recent discussions on whether GPT-5 as a whole is actually cheaper to operate than earlier models given the changes it necessitates in caching.
Everyone agrees that the release of GPT-5 was botched. Everyone can also agree that the direct jump from GPT-4o and o3 to GPT-5 was not of similar size to the jump from GPT-3 to GPT-4, that it was not the direct quantum leap we were hoping for, and that the release was overhyped quite a bit.
a quantum leap might actually be accurate
Everyone can also agree that the direct jump from GPT-4o and o3 to GPT-5 was not of similar size to the jump from GPT-3 to GPT-4
Sure babe, you keep telling yourself that.
Our Very Good Friends are often likened to Scientology, but have we considered Happy Science and Aum Shinrikyo?
https://en.wikipedia.org/wiki/Happy_Science https://en.wikipedia.org/wiki/Aum_Shinrikyo
aum:
Advertising and recruitment activities, dubbed the “Aum Salvation plan”, included claims of […] realizing life goals by improving intelligence and positive thinking, and concentrating on what was important at the expense of leisure.
this is in common with both our very good friends and scientology, but i think happy science is much stupider and more in line with srinivasan’s network states, in that it has/is an explicitly far-right political organization built in from day one
Yeah, good point.
Network State def has that store-brand Team Rocket vibe.
Aum is very apt imo given how it recruited stem types.
aum recruited a lot of people, and also failed at some things that would presumably be easier to do safely than what they did
Meanwhile, Aum had also attempted to manufacture 1,000 assault rifles, but only completed one.[37]
otoh they were also straight up delusional about what they could achieve, including toying with the idea of manufacturing nukes, military gas lasers, and getting and launching a Proton rocket (not exactly grounded for a group of people who couldn’t make AK-74s)
they were also more media savvy in that they didn’t pollute info space with their ideas only using blog posts, they ~~had entire radio station~~ rented time from a major radio station within russia, broadcasting both within freshly former soviet union and into japan from vladivostok (which was much bigger deal in 90s than today)
It’s pretty telling about Our Good Friends’ media savviness that it took an all-consuming AI bubble and plenty of help from friends in high places to break into the mainstream.
radio transmissions in russia were money shot for aum, and idk if it was a fluke or deliberate strategy. people had for a long time expectation that radio and tv are authoritative, reliable sources (due to censorship that doubled as fact-checker, and about all of it was state-owned) and in 90s every bit of that broke down because of privatization, and now you could get on the air and say anything, with many taking that at face value, as long as you pay up. at the same time there was major economic crisis and cults prey on the desperate. result?
Following the sarin gas attack on the Tokyo subway, two Russian Duma committees began investigations of the Aum – the Committee on Religious Matters and the Committee on Security Matters. A report from the Security Committee states that the Aum’s followers numbered 35,000, with up to 55,000 laymen visiting the sect’s seminars sporadically. This contrasts sharply with the numbers in Japan which are 18,000 and 35,000 respectively. The Security Committee report also states that the Russian sect had 5,500 full-time monks who lived in Aum accommodations, usually housing donated by Aum followers. Russian Aum officials, themselves, claim that over 300 people a day attended services in Moscow. The official Russian Duma investigation into the Aum described the cult as a closed, centralized organization.
With all that money sloshing around, It’s only a matter of time before they start cribbing from their neighbors and we get an anime adaptation of HPMoR.
And how it fused Buddhism with more Christian religions. Considering how often you heard of old hackers being interested in the former.
got sent this image
wonder how many more of these things we’ll see before people start having a real bileful response to this (over and above the fact that a number of people have been warning about exactly this outcome for a while now)
(transcript below)
transcript
title: I gave my mom’s company an AI automation and now she and her coworkers are unemployed
body: So this is eating me alive and I don’t really know where else to put it. I run this little agency that builds these AI agents for staffing firms. Basically the agent pre-screens candidates, pulls the info into a neat report, and sends it back so recruiters don’t waste hours on screening calls. It’s supposed to be a tool, not a replacement.
My mom works at this mid sized recruiting company. She’s always complained about how long it takes to qualify candidates, so I set them up with one of my agents just to test it. It crushed it. Way faster, way cheaper, and honestly more consistent than most of their team.
Fast forward two months and they’ve quietly laid off almost her whole department. Including my mom. I feel sick. Like I built something that was supposed to help people, and instead it wiped out my mom’s job and her team. I keep replaying it in my head like I basically automated my own family out of work.
Pressing F for doubt, looks like a marketing scam to me.
It’s pretty screwed up that humble bragging about putting their own mother out of a job is a useful opening to selling a scam-service. At least the people that buy into it will get what they have coming?
that or some kind of bait
I didn’t dig into the post/username at all so I can’t guesstimate likelihood of this! get where you’re coming from
(…I really need to finish my blog relaunch (this thought brought to you by the explication I was about to embark on in this context))
(((it’s soon.gif tho!)))
Gonna have to agree with zogwarg here. I checked out the Reddit profile and they’re a self-proclaimed entrepreneur whose one-man “agency” has zero clients and has yet to even land on an idea, attempting to crowdsource the latter on r/entrepreneur.
dude has a post named “from 0 to 1 clients in 48h” where someone calls him out for already claiming to have 17 customers, so it’s reasonable to assume that this guy is full of shit either way
then again, there’s plenty of clueless people, so it could be real, because welcome to current year, where everything is fake, satire is dead, and reuters puts the onion out of business
‘set them up with’
Anybody want to bet if they did it for free?
could go either way tbh
New piece from the Financial Times: Tech utterly dominates markets. Should we worry?
Pulling out a specific point, the article notes that market concentration is higher now than it was during the dot-com bubble back in 2000.
You want my overall take, I’m with Zitron - this is quite a narrative shift.
AI scrapers have managed to bypass Anubis on Codeberg: https://programming.dev/post/35852706
Not sure why this “member of technical staff at METR” felt the need to post about the lowered productivity of Black people in the southern US states after slavery was abolished. I’m sure it’s nothing.
https://www.lesswrong.com/posts/Zr37dY5YPRT6s56jY/thomas-kwa-s-shortform?commentId=iwGgqsmpY6Tcex5je
He seems to state that after the abolition of slavery, less of the profits from a unit of labor time accrued to the owners of the land in question. The reasons for this are, of course, a mystery.
It’s always the people you most expect.
Free people have less productivity, time to wirehead everyone! A Brave New World!
the eternal mba graduate worldview