I’ve had this conversation too many times to not make a meme out of it.
Having lost my job in part due to AI, I’m just so tired of it already.
How long do I have to wait for the bubble to pop?
About 10 years
Cute, no one’s really afraid of LLMs like that - it’s not going to create AGI, it’s going to waste shittons of resources to create shitty midjourney images and wreck the fucking environment. and spike the cost of computation. for what? where’s the fucking killer app already? come on it’s been YEARS, what, so I don’t have to type an email reply? that’s what all this bullshit is for? what a load of horse shit
https://bsky.app/profile/bennjordan.bsky.social/post/3mcm7wmwm3k2d
that’s ONE fucking data center. they want hundreds more.
get fucked with this “ai doomers” strawman bullshit
There are definitely morons who think that LLMs are a few months away from Terminator. They don’t have much overlap with the people who complain about real issues with LLMs, like the ones you mention.
The way it’s advertised is as if AI is there to replace people thinking; everyone in AI ads is so dumb and seemingly incapable of independent thought.
They’re desperate for people to be reliant on it, but there are no real use cases beyond basic ideation.
AI, why is my indoor plant growing towards the window? Doesn’t it love me?
Real AI ad.
Yep. The big names - goog, ms, meta, x etc. - flailing wildly, trying again and again to inject this shit into their products, smacks of desperation.
All this bullshit is for line go up. And it’s mostly working, so far.
However, the bankers heavily involved in financing AI datacenters have become nervous and started approaching insurance firms for coverage in case the projects fail… And the hedge funds have had low, 0 or negative ROI for the last ~4 years due to the prior failures of the Metaverse, NFTs, and now AI not paying off yet… So new funds are drying up on two fronts, and if they don’t magically become profitable in the next year then the line is gonna go down, hard.
Mind sharing some sources? Not that I don’t believe you, I just want to read more good news.
Asking for sources is always welcome with me.
Here’s a deep dive from Ed Zitron into the whole AI/LLM industry that details the heavy investment from several key banks (Deutsche Bank being one), and the shrinking finance availability from traditional means (bank loans, hedge funds, managed funds). It’s long, but it’s really worth a read if you have a spare hour or so.
https://www.wheresyoured.at/the-enshittifinancial-crisis/
A glaring tell that I don’t recall him highlighting is that the hyperscalers have largely outsourced the risk of AI investment to others. Meta, Google, and Microsoft are making comparatively small bets on AI - they’re using cash from profits on their other business models, which are still significant (measured in the low billions) but don’t require them to take loans or leverage themselves. This means they’re playing it very cautiously, all the while shoving AI into all their products to make it seem like they’re all-in and it’s ‘the next big thing’, which is helping their stock prices in the investor frenzy. Most of the investment capital required for the AI boom is going into hardware, datacenters, and direct investment in software development - and that’s mostly being avoided by the big guys. This lets them minimize risk while still having a decent win if it takes off. Conversely, if/when the bubble bursts they’ll still take a hit, but they’ll also still be making money via other streams, so it’ll be a bump in the road for them - compared to what will happen to OpenAI, Anthropic, Stability, the datacenters, and their financiers.
https://archive.is/WwJRg (NYTimes article).
… are you under the impression that doomers aren’t real? I mean, maybe they don’t really believe the bullshit they’re spewing, but they talk endlessly about the dangers of AI and seem to actually believe LLMs are actively dangerous. Have you just not heard of these dorks? They’re, like, near-term human extinction folks who think AGI is just around the corner and will kill us all.
There’s TONS of valid issues. You’re painting everyone who criticizes AI as a doomer, and it’s specious, lazy, and does nothing to help your argument.
Just because a tiny portion of people who despise LLMs think there’s an AGI/AI/Superintelligence risk doesn’t mean that worry is shared throughout the vast majority of AI’s critics.
Your argument is weak, and calling them ‘dorks’ doesn’t support your thesis.
No I’m not? I’m painting anyone who is a doomer as a doomer, as in, specifically the people who think AGI will kill us all. They don’t care about valid issues, they care specifically about this stupid nonsense that they read about on lesswrong.com
This is a real subset of people and the meme is making fun of them, because they’re just feeding into the AI hype bubble.
No one is saying that the valid issues surrounding this tech bubble aren’t real, but that has little to do with the doomer cohort.
Elon Musk is one of them, kinda famously. Remember when there was that whole movement to rein in OpenAI for 6 months? Elon backed that, while starting xAI.
It’s a weird intersection of promoting this idea that LLMs are a form of superintelligence and therefore harmful, while also working on your own version of it (that remains under your control, of course).
I’m not worried about AI turning into Skynet. That’s like item # 1,576,549 on my list of reasons I hate AI.
It’s literally everything else about AI:
- Creating scarcity in PC parts and components to feed the AI bros’ insatiable hunger for growth.
- Rent-seekers like Bezos trying to spin the above into yet more “you’ll own nothing and be happy”
- Destroying farmland or otherwise affordable property for sprawling datacenters that create like maybe 12 jobs that can’t be done remotely.
- We never or barely addressed the massive disinformation problem and here’s AI turning it up to 11 by making it trivial to shit out whatever narrative you want at massive scale.
- Stealing from content creators to feed itself and then denying those same creators web traffic
- Just being a bullshit machine that people blindly believe
- Shoving it into everything when no one asks for it and many don’t want it
- Causing electricity rates to go up and subjecting people to either rolling blackouts or dirty generator exhaust because of demand
- The massive use of water for cooling
- Not actually being able to do jobs, but C-levels laying off people and creating unemployment anyway
- C-levels forcing people to use “AI” for the express purpose of training it to do their jobs so they can be laid off
- The goddamned pile of lies upon lies gaslighting us into believing this is to make our lives better.
- Ignoring the loss of jobs, all the supposed savings from “AI” go to the top; shit isn’t getting any less expensive. It’s even getting more expensive to take a literal shit; the water and poo plants run on electricity, and those rising costs get passed down.
Israel has already been using AI to generate kill lists for their bombs and sniper drones.
https://www.972mag.com/lavender-ai-israeli-army-gaza/
The problem isn’t that skynet turns malicious. The problem is that AI is dumb and evil people have already given it the helm.
First and foremost, it seems to be the intended goal: massive layoffs using AI as a convenient excuse, rather than admitting they’re firing people for other reasons.
The real danger isn’t that they create an AGI… It’s that they convince enough people that they have. People have a propensity to believe stupid, impossible things even when they DON’T look plausible on the surface.
The way generative AI works, it fundamentally cannot ever be AGI. Hell, if we hadn’t recently updated the definition of AI to include “stuff that mimics intelligence”, current AI wouldn’t be considered AI.
But reliance on tokens, knowledge, and algorithms is anti-intelligence. There is no form of critical thought, problem solving, or analysis. It’s just a knowledge compiler with bad knowledge for reference and no ability to understand that knowledge before repeating it back.
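To make the point concrete, here’s a toy sketch of what “predicting the next token” boils down to. This is a bigram frequency model (the corpus and function names are made up for illustration); real LLMs use neural networks over billions of parameters, but the underlying principle - pick the statistically likely continuation, with zero understanding of the words - is the same:

```python
from collections import Counter, defaultdict

# Toy corpus; a real model trains on terabytes of scraped text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which token follows which: pure statistics, no comprehension.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_token(prev):
    """Return the most statistically likely continuation of `prev`."""
    return following[prev].most_common(1)[0][0]

print(next_token("the"))  # "cat" - the most frequent follower, not a thought
```

The model “knows” that “cat” usually follows “the” in its corpus, and nothing else. Scale that up and you get fluent-sounding text, but the mechanism never involves checking whether the output is true.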
What is an AI doomer in this case? I know LLMs can’t get to AGI, but I’m confident we’re going to try to burn the world down to prove it.
Not sure if I’m coining a term here. I meant people whose primary existential concern right now is ChatGPT becoming Skynet and enslaving or hunting humanity to extinction. There are hundreds of them on r/collapse on reddit. Apparently there’s a whole youtube rabbit hole that convinces people of this. Edit: I guess I would also include people who think LLMs will imminently–and actually–replace all human workers; as opposed to human workers being laid off due to promises from AI salespeople that could never live up to reality.
there’s SO VERY MUCH to criticize about LLMs that making shit up isn’t needed. It’s garbage-ware already. stop coming up with strawmen.
Removed by mod
Removed by mod
AGI has already been reached. It means: Albanian Generative Influencer. Still not beating the idiocracy allegations tho.