ChatGPT couldn’t even generate an accurate alphabet poster for preschoolers.)
And it’s already trained on the whole of human knowledge. That’s what gets me about LLMs: if it’s already trained on the entirety of human knowledge and still can’t accurately complete these basic tasks, how do these companies intend to fulfill their extravagant promises?
It’s trained on human writing, not knowledge. It has no actual understanding of meaning or logical connections, just an impressive statistical store of which words and phrases tend to occur in the context of the prompt and the rest of the answer. It’s very good at sounding human, and that’s one hell of an achievement.
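To make the "patterns, not understanding" point concrete, here's a deliberately tiny sketch of the idea: a bigram model that only learns which word tends to follow which in its training text. This is a toy illustration, not how real LLMs actually work (they use neural networks over vastly more context), but the failure mode is the same in spirit: fluent-looking output with no grasp of meaning.

```python
import random
from collections import defaultdict

# Toy "parrot": records which word follows which in the training text,
# with zero notion of what any of the words mean.
corpus = ("the model predicts the next word the model has seen "
          "most often after the current word").split()

follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def babble(start, n=8, seed=0):
    """Generate text by repeatedly picking a word that followed
    the previous word somewhere in the training data."""
    random.seed(seed)
    out = [start]
    for _ in range(n):
        choices = follows.get(out[-1])
        if not choices:
            break
        out.append(random.choice(choices))
    return " ".join(out)

print(babble("the"))  # locally plausible, globally meaningless
```

Every adjacent word pair it emits really did occur in the training text, so it sounds superficially right while saying nothing, which is the parrot problem in miniature.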
But its lack of actual knowledge becomes apparent in things like the alphabet poster example, or a history professor asking a simple question and getting a complicated answer: one that sounds like a student trying to seem like they read the books in question, but misses the one-sentence answer that someone who actually knows the books would give. Source, the example I cited is about a third of the way into the article.
If the best it can do is sound like a student trying to bullshit their way through, then that’s probably the most accurate description: It has been trained to sound knowledgeable, but it’s actually just a really good bullshitter.
Again, don’t get me wrong, as a language processing and generation tool, I think it’s an amazing demonstration of what is possible now. I just don’t like seeing people ascribe any technical understanding to a hyperintelligent parrot.
BUILD MORE DATACENTRES!
They don’t. It’s a scam.
So just like their creators.