Cottom cuts to the heart of an assumption that both AI doomers and AI boosters seem to take for granted: that an AI-dominated future is inevitable.
Given the development plateau tech companies seem to be running into these days, a future built on large language models (LLMs) is far from a lock. (Consider, for example, that after a recent December update, ChatGPT couldn’t even generate an accurate alphabet poster for preschoolers.)
…
Cottom points to the historical record — for example, the fact that chattel slavery was at one time seen as a preordained fact of life, a myth spread by the wealthiest members of that bygone age.
ChatGPT couldn’t even generate an accurate alphabet poster for preschoolers.
And it’s already trained on the whole of human knowledge. That’s what gets me about LLMs: if it’s already trained on the entirety of human knowledge and still can’t accurately complete these basic tasks, how do these companies intend to fulfill their extravagant promises?
It’s trained on human writing, not knowledge. It has no actual understanding of meaning or logical connections, just an impressive statistical store of language patterns: which words and phrases tend to occur in the context of the prompt and the rest of the answer. It’s very good at sounding human, and that’s one hell of an achievement.
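To make that concrete, here’s a toy sketch in plain Python. This is nothing like actual GPT internals (real models are neural networks trained over billions of tokens, not lookup tables), and the corpus and function names are invented for illustration, but it shows in miniature what “continuing the most likely pattern” means:

```python
from collections import Counter, defaultdict
import random

# Toy illustration of pattern continuation: count which word follows
# which pair of words in a corpus, then generate text by sampling
# likely continuations. There is no model of truth anywhere in here,
# only statistics about which words tend to follow which.

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
).split()

# Map each two-word context to a frequency count of next words.
follows = defaultdict(Counter)
for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
    follows[(a, b)][c] += 1

def generate(context, length=10):
    words = list(context)
    for _ in range(length):
        options = follows.get(tuple(words[-2:]))
        if not options:
            break
        # Sample the next word proportionally to observed frequency.
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate(("the", "cat")))
# Prints a fluent-sounding recombination of the training text, e.g.
# "the cat sat on the rug . the dog chased ..." -- grammatical, but
# nothing in the model knows what a cat is.
```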
But its lack of actual knowledge becomes apparent in things like the alphabet poster example, or a history professor asking a simple question and getting a complicated answer: one that sounds like a student trying to seem like they read the books in question, while missing the one-sentence answer that someone who actually knows the books would give. (Source: the example I cited is about a third of the way into the article.)
If the best it can do is sound like a student trying to bullshit their way through, then that’s probably the most accurate description: It has been trained to sound knowledgeable, but it’s actually just a really good bullshitter.
Again, don’t get me wrong: as a language processing and generation tool, I think it’s an amazing demonstration of what is possible now. I just don’t like seeing people ascribe any technical understanding to a hyperintelligent parrot.
I like this comparison
If it’s already trained on the entirety of human knowledge and still can’t accurately complete these basic tasks, how do these companies intend to fulfill their extravagant promises?
BUILD MORE DATACENTRES!
They don’t. It’s a scam.
It has been trained to sound knowledgeable, but it’s actually just a really good bullshitter.
So just like their creators.