Gen AI and the people behind it are profoundly anti-human. They want everyone dead, diminished, obsolete, and defeated. Refuse AI features anywhere and everywhere they are placed.
More like the wealthy will use anything possible to seize control of everything.
Cottom cuts to the heart of an assumption that both AI doomers and AI boosters seem to take for granted: that an AI-dominated future is inevitable. It isn't.
Given the development plateau tech companies seem to be running into these days, a future built on large language models (LLMs) is far from a lock. (Consider, for example, the fact that after a very recent December update, ChatGPT couldn’t even generate an accurate alphabet poster for preschoolers.)
…
Cottom points to the historical record: for example, the fact that chattel slavery was at one time seen as a preordained fact of life, a myth spread by the wealthiest members of that bygone age.
I like this comparison.
ChatGPT couldn’t even generate an accurate alphabet poster for preschoolers.
And it’s already trained on the whole of human knowledge. That’s what gets me about LLMs: if it’s already trained on the entirety of human knowledge and still can’t accurately complete these basic tasks, how do these companies intend to fulfill their extravagant promises?
It’s trained on human writing, not knowledge. It has no actual understanding of meaning or logical connections, just an impressive store of knowledge about language patterns and phrases that occur in the context of the prompt and the rest of the answer. It’s very good at sounding human, and that’s one hell of an achievement.
But its lack of actual knowledge becomes apparent in things like the alphabet poster example, or a history professor asking a simple question and getting a complicated answer that sounds like a student trying to seem as if they’d read the books in question while missing the one-sentence answer that someone who actually knows the books would give. (Source: the example I cited is about a third of the way into the article.)
If the best it can do is sound like a student trying to bullshit their way through, then that’s probably the most accurate description: It has been trained to sound knowledgeable, but it’s actually just a really good bullshitter.
Again, don’t get me wrong: as a language processing and generation tool, I think it’s an amazing demonstration of what is possible now. I just don’t like seeing people ascribe any technical understanding to a hyperintelligent parrot.
They don’t. It’s a scam.
Hahaha. The wealthy are using THEIR WEALTH to seize control of everything. Always have, always will.
I have a plan, guys.
Well. Plans.

People don’t rebel until their kids start dying of hunger. Rebellion is expensive.
You mean we gotta wait another 10 years?!
“The point of AI is to permit the wealthy to access the benefits of the talented, while preventing the talented from accessing the benefits of wealth.” [Paraphrased, I didn’t remember who originally said it.]
As if that wasn’t already the case before AI; AI just made it more blatant.
AI accelerated it, is all. The wealthy will kill as many people as they can should they ever develop the means to live forever, which is their end goal: immortality. Their healthcare (having the money to go wherever and pay whatever) is preventative, while everyone else’s is reactive, and only happens if some middleman is on board with the doctor’s prognosis. Only those able to serve them with their labor are desirable; everyone not useful to them should go off and die. This is why they fight healthcare for all so hard.
From 1997 to 2007, Office users already told the world that ‘AI Assistants’ can just fuck off. It was called Clippy, and everyone hated it! We will crush you just like we crushed Clippy.
I thought they were going to use AI to share their wealth /s
Trickle-down AInomy!
So not only are they pissing on us and lacking the decency to call it rain, but now they’re teaching machines to do it, too?!
A common mistake, as it were.
Thanks, Professor Obvious
She’s not just saying that the wealthy are trying to seize control; she also discusses how they’re leveraging the rhetoric of inevitable AI to build the foundations for future control. Significantly, she identifies this attempt as being driven by the anxiety the wealthy feel about keeping that control, and discusses strategies for resisting it.
What she’s saying is far from obvious; I’ve seen so many anti-AI, anti-capitalist folk unwittingly perpetuating the rhetorical agenda of the wealthy by accepting the notion that a society run by super-advanced AI is inevitable.
Yeah I call bullshit on that.
Automation is inevitable; that doesn’t mean decision-making will be.
Having a fully automated factory that churns out 100 cars a day is not dystopian if people are able to configure that number.
The dystopian part is having only a few with “admin access”.
Dystopian is when they finish the fully automated kill bots that kill you if you don’t get to work on time.
The wealthy feel anxiety about their future control? Why would they need to feel that?
Because they’ve run out of things to distract the masses with, plus they need the plebs in order to maintain their little fiefdoms.