Isn’t this how we get the Matrix?
In The Matrix humans were used as batteries, not processors. Although processors was the original writing, before an exec thought the average person would be “confused” by that.
As much as I think current AI is another bullshit marketing term, we’ll see once AI has been around for at least a few centuries. I don’t think we need thousands of years like brains did.
This is the thing about AI criticism. AI in the LLM sense we know today has been publicly available for a few years, in development for a couple decades. Any criticism about how stupid it is will be irrelevant in 6-12 months. Look at the people trashing AI 2 years ago, how it would constantly hallucinate and produce gibberish code. Now it’s a lot better in both regards. In 2 more years, what then? It’ll be better. Yes, we’ll hit the LLM ceiling, but there’s a lot of fine tuning to be done.
Criticize AI for the environmental effects, the inequality that it’s enhancing, how the rich and powerful have access to the AIs that know too much about us. Criticize it for lacking the reality of human composed text. But criticizing it on technical grounds is not the right angle.
FWIW, if you asked both an AI and a HS student to crank out an essay on a random topic the student hadn’t studied, the student would be the one making more shit up. Human brains have limitations too. AI and human brains aren’t directly comparable.
Tbf a lot of energy goes into producing the food we consume. But nowhere near what it costs to run this AI garbage.
Idea meat FTW
AI is a parasite.
So are many idea meat lumps
Are we talking incandescent bulbs or LED bulbs?
Strong LED or (very) weak incandescent. It’s about 20 watts (at least, that’s what the popsci meme reports).
I dunno, I get real hungry on days when I have to think a lot
There are probably 2 reasons for this:
- There’s probably a lot more motor control going on than you would expect when you need to think (writing, fidgeting, etc.).
- Your brain wants sugars, so when you run out of immediately-available glycogen to break down, you will want to eat more in order to keep thinking. Breaking down fats won’t supply energy fast enough (in the short term) to keep complex thought running continuously.
Lead bulbs ofc.
This guy lead bulbs.
Edit: removing my far too serious comment.
Tldr Poe’s law, I can’t tell if this is a critique of AI or of AI critics.
Ah yeah, don’t take shitposts too seriously.
For me, this meme was just a Matrix joke.
Win at what?
Win at being shit.
God creates man.
Man creates god.
Man kills god.
Man creates AI.
AI kills man.
AI destroys earth.
Crocodile people rule the galaxy until the heat death of the universe.
– Nietzsche
Yes
Also you can run most models on a wide range of fuels. Sucrose, glucose, maltose, ethanol, molybdenum disulfide, small rocks, some grass. Really anything.
Yes, I think the point is just that it uses a shit ton more energy than the human brain, generally.
Whoooosh
The advanced fat-based “thinking” machine at work.
The thing on the right is also a glorified prediction engine. I suppose whoever made this is steeped in religious dogma, but humans aren’t that advanced either. We just predict things.
Inb4 the advanced fat-based brains brigade me using their advanced fat-based prediction engines 🙄
It’s still leagues ahead of LLMs. I’m not saying it’s entirely impossible to build a computer that surpasses the human brain in actual thinking. But LLMs ain’t it.
The feature set of the human brain is different, in a way that you can’t compensate for just by increasing scale. So you get something that works, but not quite, by using several orders of magnitude more power.
We optimize and learn constantly. We have chunking, whereby a complex idea becomes simpler for our brain once it’s been processed a few times, and this allows us to progressively work on more and more complex ideas without an increase in our working memory. And a lot of other stuff.
If you spend enough time using LLMs you must notice how their working is different from your own.
I think the moat is that when a human is born and their world model starts “training”, it’s already pre-trained by millions of years of evolution. Instead of starting from random weights like any artificial neural network, it starts with usable stuff, lessons from scenarios it may never encounter but will nevertheless gain wisdom from.
I don’t spend time working with LLMs. I’d agree we have additional features. For example, I think that while computers currently can guess, we can guess and check in a meaningful way. But that’s not what the meme was about. I would argue the meme was barely about anything other than “ai bad, me smort”. Ironic, since the LLM could probably make a better one even if it “doesn’t understand”, whatever understanding is.
Do you not have an internal experience?
You can’t prove that I do, I can’t prove that you do. Those metaphysical arguments don’t have much punch in a scientific conversation.
Of course I do, that doesn’t mean we understand it.
I don’t need to understand consciousness to be confident an LLM is not conscious.
Dogs are glorified barking machines. Does a tape player playing a recording of a dog barking have the consciousness or intelligence of a dog?
Are dogs conscious? What about mites?
Probably, their interactions with humans/dogs suggest they have a “theory of mind”.
Mites? No.
Sorry, but I’m not a prediction engine, I am capable of abstract thought, and actually understanding the meaning of the words.
I can also process all kinds of different data and make connections between them, which includes emotional connections.
Another cool trick, I also have this thing called a consciousness which is something I can’t explain or put into words but I know it exists. All under 20W.
So you have something you don’t understand and can’t prove exists. Like a hallucination?
Tbh the rest isn’t worth responding to. Emotional connections? Come on, you’re a horny bag of chemical soup. None of this is real. Humans mostly guess what reality is anyway.
“this thing called a consciousness which is something I can’t explain or put into words but I know it exists. All under 20W.”
Maybe you’d be able to if you dial it to 25W
Nonetheless, the human brain is a better prediction engine.
Yes, we have better glorified prediction engines in general