

Given the trajectory of the world, yeah, let’s go with that
Given the trajectory of the world, yeah, let’s go with that
I like the DNF / vaporware analogy, but did we ever have a GPT Doom or Duke3d killer app in the first place? Did I miss it?
In a literal sense, Google did attempt to make GPT Doom, and failed (i.e. a large language model can’t run Doom).
In a metaphorical sense, the AI equivalent to Doom was probably AI Dungeon, a roleplay-focused chatbot viewed as quite impressive when it released in 2020.
Ed Zitron’s given his thoughts on GPT-5’s dumpster fire launch:
Personally, I can see his point - the Duke Nukem Forever levels of hype around GPT-5 set the promptfondlers up for Duke Nukem Forever levels of disappointment with GPT-5, and the “deaths” of their AI waifus/therapists this has killed whatever dopamine delivery mechanisms they’ve set up for themselves.
Anyways, personal sidenote/prediction: I suspect the Internet Archive’s gonna have a much harder time archiving blogs/websites going forward.
Me, two months ago
Looks like I was on the money - Reddit’s began limiting what the Internet Archive can access, claiming AI corps have been scraping archived posts to get around Reddit’s pre-existing blocks on scrapers. Part of me suspects more sites are gonna follow suit pretty soon - Reddit’s given them a pretty solid excuse to use.
You’re dead right on that.
Part of me suspects STEM in general (primarily tech, the other disciplines look well-protected from the fallout) will have to deal with cleaning off the stench of Eau de Fash after the dust settles, with tech in particular viewed as unequipped to resist fascism at best and out-and-proud fascists at worst.
Iris van-Rooij found AI slop in the wild (determining it as such by how it mangled a word’s definition) and went on find multiple other cases. She’s written a blog post about this, titled “AI slop and the destruction of knowledge”.
New Blood in the Machine about GPT-5’s dumpster fire launch: GPT-5 is a joke. Will it matter?
I wrote yesterday about red-team cybersecurity and how the attack testing teams don’t see a lot of use for AI in their jobs. But maybe the security guys should be getting into AI. Because all these agents are a hilariously vulnerable attack surface that will reap rich rewards for a long while to come.
Hey, look on the bright side, David - the user is no longer the weakest part of a cybersecurity system, so they won’t face as many social engineering attempts on them.
Seriously, though, I fully expect someone’s gonna pull off a major breach through a chatbot sooner or later. We’re probably overdue for an ILOVEYOU-level disaster.
It’ll probably earn a lot of users if and when Github goes down the shitter. They’ve publicly stood with marginalised users before, so they’re already in my good books.
Tante fires off about web search:
There used to be this deal between Google (and other search engines) and the Web: You get to index our stuff, show ads next to them but you link our work. AI Overview and Perplexity and all these systems cancel that deal.
And maybe - for a while - search will also need to die a bit? Make the whole web uncrawlable. Refuse any bots. As an act of resistance to the tech sector as a whole.
On a personal sidenote, part of me suspects webrings and web directories will see a boost in popularity in the coming years - with web search in the shitter and AI crawlers being a major threat, they’re likely your safest and most reliable method of bringing human traffic to your personal site/blog.
Well, what’s next, and how much work is it?
I’m not particularly sure myself. By my guess, I don’t expect one specific profession to be “what’s next”, but a wide variety of professions becoming highly lucrative, primarily those which can exploit the fallout of the AI bubble to their benefit. Giving some predictions:
Therapists and psychiatrists should find plenty of demand, as mental health crisis and cases of AI psychosis provide them a steady stream of clients.
Those in writing related jobs (e.g. copywriters) can likely squeeze hefty premiums from clients with AI-written work that needs fixing.
Programmers may find themselves a job tearing down the mountains of technical debt introduced by vibe-coding, and can probably crowbar a premium out of desperate clients as well. (This one’s probably gonna be limited to senior coders, though - juniors are likely getting the shaft on this front)
As for which degrees will come into high demand, I expect it will be mainly humanities degrees that benefit - either directly through netting you a profession that can exploit the AI fallout, or indirectly through showing you have skills that an LLM can’t imitate.
I didn’t want to be a computing professional. I trained as a jazz pianist
Nice. You could probably earn some cash doing that on the side.
At some point we ought to focus on the real problem: not STEM, not humanities, but business schools and MBA programs.
You’re goddamn right.
Not only that, the reported development of post-quantum cryptography (with NIST having released some finalised encryption standards last year) could give cybersec professionals a headstart on protecting everything if it fully comes to fruition (assuming said cryptography lives up to its billing).
You want me to take a shot in the dark, I expect zero-knowledge proofs will manage to break into the mainstream before quantum computing becomes a thing - minimising the info you give out is good for protecting your users’ privacy, and minimises the amount of info would-be attackers could work with.
The security guys aren’t interested in quantum computing either. Because it doesn’t exist yet. The report’s author seems surprised that “several interviewees believe that quantum computing is in an overhyped phase.”
If and when quantum computing does start making waves, I expect the security guys will start loudly crowing about it.
Its ostensible ability to break most regular encryption schemes over its knee would be a complete fucking nightmare for them, that’s for sure.
Thomasaurus has given their thoughts on using AI, in a journal entry called “I tried coding with AI, I became lazy and stupid)”. Unsurprisingly, the whole thing is one long sneer, with a damning indictment of its effectiveness at the end:
If I lose my job due to AI, it will be because I used it so much it made me lazy and stupid to the point where another human has to replace me and I become unemployable.
I shouldn’t invest time in AI. I should invest more time studying new things that interest me. That’s probably the only way to keep doing this job and, you know, be safe.
To extend that analogy a bit, the dunkfest I noted suggests that a portion of the public views STEM as perfectly okay with the orphan grinder’s existence at best, and proud of having orphan blood on their hands at worst.
As for the motorised orphan grinder you mention, it looks to me like the public viewed its construction as STEM voting for the Leopards Eating People’s Faces Party (with predictable consequences).
Quick update: I’ve checked the response on Bluesky, and it seems the general response is of schadenfreude at STEM’s expense. From the replies, I’ve found:
Humanities graduates directly mocking STEM (Fig. 1,Fig. 2, Fig. 3, Fig. 4, Fig. 5)
Mockery of the long-running “learn to code” mantra (Fig. 1, Fig. 2, Fig. 3, Fig. 4 Fig. 5, Fig. 6)
Claims that STEM automated themselves out of a job by creating AI (Fig. 1, Fig. 2, Fig. 3)
Plus one user mocking STEM in general as “[choosing] fascism and “billions must die”” out of greed, and another approving of others’ dunks on STEM over past degree-related grievances.
You want my take on this dunkfest, this suggests STEM’s been hit with a double-whammy here - not only has STEM lost the status their “high-paying” reputation gave them, but that reputation (plus a lotta built-up grievances from mockery of the humanities) has crippled STEM’s ability to garner sympathy for their current predicament.
New article from the New York Times reporting on an influx of compsci graduates struggling to find jobs (ostensibly caused by AI automation). Found a real money shot about a quarter of the way through:
Among college graduates ages 22 to 27, computer science and computer engineering majors are facing some of the highest unemployment rates, 6.1 percent and 7.5 percent respectively, according to a report from the Federal Reserve Bank of New York. That is more than double the unemployment rate among recent biology and art history graduates, which is just 3 percent.
You want my take, I expect this article’s gonna blow a major hole in STEM’s public image - being a path to a high-paying job was one of STEM’s major selling points (especially compared to the “useless” art/humanities degrees), and this new article not only undermines that selling point, but argues for flipping it on its head.
Not sure myself, but the mods are probably either excluded from being banned by The Wheel™, or unbanned immediately afterwards, just to keep things running smoothly.
In more low-key news, the New Yorker’s given public praise to Blood in the Machine, pulling a year-old review back into the public spotlight.
Its hardly anything new (the Luddites’ cultural re-assessment has been going on since 2023), but its hardly a good sign for the tech industry at large (or AI more specifically) that a major newspaper’s decided to give some positive coverage to 'em.
With that out the way, here’s a sidenote:
When history looks back on the Luddites’ cultural re-assessment, I expect the rise of generative AI will be pointed to as a major factor.
Beyond being a blatant repeat of what the Luddites fought against (automation being used to fuck over workers and artisans), its role in enabling bosses to kill jobs and abuse labour in practically every field imaginable (including fields that were thought safe from automation) has provided highly fertile ground for developing class solidarity.
Another day, another case of “personal responsibility” used to shift blame for systemic issues, and scapegoat the masses for problems bad actors actively imposed on them.
Its not like we’ve heard that exact same song and dance a million times before, I’m sure the public hasn’t gotten sick and tired of it by this point.
Probable hot take: this shit’s probably also hampering people’s efforts to overcome self-serving bias, as well - taking responsibility for your own faults is hard enough in a vacuum, its likely even harder when bad actors act with impunity by shifting the blame to you.