

Someone I know called AI “a non-invasive procedure to lobotomise people” after I mentioned this Pivot to AI, and its stuck with me ever since
Someone I know called AI “a non-invasive procedure to lobotomise people” after I mentioned this Pivot to AI, and its stuck with me ever since
…You know, if I actually believed in the whole AGI doom scenario (and bought into Eliezer’s self-hype) I would be even more pissed at him and sneer even harder at him. He basically set himself up as a critical savior to mankind, one of the only people clear sighted enough to see the real dangers and most important question… and then he totally failed to deliver. Not only that he created the very hype that would trigger the creation of the unaligned AGI he promised to prevent!
As the cherry on top of this shit sundae, the bubble caused by said hype dealt devastating damage to the Internet and the world at large in spite of failing to create the unaligned AGI Yud was doomsaying about, and made people more vulnerable to falling for the plagiarism-fueled lying machines behind said bubble.
New (paywalled) 404 Media: Google’s AI Is Destroying Search, the Internet, and Your Brain
Not a sneer, but still inexplicably funny: You Can Now Venmo the Government to Help Pay Off National Debt
Now we need to make a logic puzzle involving two people and one cup. Perhaps they are trying to share a drink equitably. Each time they drink one third of remaining cup’s volume.
Step one: Drink two-thirds of the cup’s volume
Step two: Piss one sixth of the cup’s volume
Problem solved
Long-term, I’m expecting itch to dive in popularity from this - they’ve nuked much of the trust they’ve built up over the years with this.
Found a neat mini-sneer in the wild: It’s rude to show AI output to people
Two ferrymen and three boats are on the left bank of a river. Each boat holds exactly one man. How can they get both men and all three boats to the right bank?
Officially, you can’t. Unofficially, just have one of the ferrymen tow a boat.
Caught a particularly spectacular AI fuckup in the wild:
(Sidenote: Rest in peace Ozzy - after the long and wild life you had, you’ve earned it)
Found a banger in the comments:
Hey, remember the thing that you said would happen?
The part about condemnation and mockery? Yeah, I already thought that was guaranteed, but I didn’t expect to be vindicated so soon afterwards.
EDIT: One of the replies gives an example for my “death of value-neutral AI” prediction too, openly calling AI “a weapon of mass destruction” and calling for its abolition.
Managed to stumble across two separate attempts to protect promptfondlers’ feelings from getting hurt like they deserve, titled “Shame in the machine: affective accountability and the ethics of AI” and “AI Could Have Written This: Birth of a Classist Slur in Knowledge Work”.
I found both of them whilst trawling Bluesky, and they’re being universally mocked like they deserve on there.
I don’t keep track, I just put these together when I’ve got an interesting tangent to go on.
Discovered some commentary from Baldur Bjarnason about this:
Somebody linked to the discussion about this on hacker news (boo hiss) and the examples that are cropping up there are amazing
This highlights another issue with generative models that some people have been trying to draw attention to for a while: as bad as they are in English, they are much more error-prone in other languages
(Also IMO Google translate declined substantially when they integrated more LLM-based tech)
On a personal sidenote, I can see non-English text/audio becoming a form of low-background media in and of itself, for two main reasons:
First, LLMs’ poor performance in languages other than English will make non-English AI slop easier to identify - and, by extension, easier to avoid
Second, non-English datasets will (likely) contain less AI slop in general than English datasets - between English being widely used across the world, the tech corps behind this bubble being largely American, and LLM userbases being largely English-speaking, chances are AI slop will be primarily generated in English, with non-English AI slop being a relative rarity.
By extension, knowing a second language will become more valuable as well, as it would allow you to access (and translate) low-background sources that your English-only counterparts cannot.
New Ed Zitron: The Hater’s Guide To The AI Bubble
(guy truly is the Kendrick Lamar of tech, huh)
New science-related development - The NIH Is Capping Research Proposals Because It’s Overwhelmed by AI Submissions
Found an archive of vibe-coding disasters recently - recommend checking it out.
Found a good security-related sneer in response to a low-skill exploit in Google Gemini (tl;dr: “send Gemini a prompt in white-on-white/0px text”):
I’ve got time, so I’ll fire off a sidenote:
In the immediate term, this bubble’s gonna be a goldmine of exploits - chatbots/LLMs are practically impossible to secure in any real way, and will likely be the most vulnerable part of any cybersecurity system under most circumstances. A human can resist being socially engineered, but these chatbots can’t really resist being jailbroken.
In the longer term, the one-two punch of vibe-coded programs proliferating in the wild (featuring easy-to-find and easy-to-exploit vulnerabilities) and the large scale brain drain/loss of expertise in the tech industry (from juniors failing to gain experience thanks to using LLMs and seniors getting laid off/retiring) will likely set back cybersecurity significantly, making crackers and cybercriminals’ jobs a lot easier for at least a few years.
New piece from Brian Merchant, and a new edition of AI Killed My Job just dropped