They talk about artificial “intelligence”, “reasoning” models, “semantic” supplementation, all that babble, but it’s all a way to distract you from the fact that large language models do not think. Their output does not show signs of reasoning, unless you’re a disingenuous (or worse, dumb) fuck who cherry-picks the “hallucinations” out of the equation.
And even this idiotic “hallucinations” analogy is a way to distract you from the fact that LLMs do not think. It’s there to imply that their reasoning is mostly correct but occasionally “brainfarts”; no, that is not what happens: the so-called hallucinations are the result of the exact same process as any other output.
No shit.
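For the record, here is roughly what that “exact same process” looks like, as a sketch assuming a HuggingFace-style causal LM (the function and argument names are placeholders, not anyone’s real production code). Notice that nothing in the loop ever asks whether the next token is true; it just scores, samples, and appends, every single time:

```python
import torch

def generate(model, tokenizer, prompt, max_new_tokens=50, temperature=1.0):
    # Encode the prompt into token ids (HuggingFace-style interface assumed).
    ids = tokenizer.encode(prompt, return_tensors="pt")
    for _ in range(max_new_tokens):
        logits = model(ids).logits[:, -1, :]                 # raw scores for the next token
        probs = torch.softmax(logits / temperature, dim=-1)  # scores -> probabilities
        next_id = torch.multinomial(probs, num_samples=1)    # sample one token; no truth check anywhere
        ids = torch.cat([ids, next_id], dim=-1)              # append it and go again
    return tokenizer.decode(ids[0])
```

Whether the result reads as a correct fact or a fabricated one, it came out of that identical softmax-and-sample step. A “hallucination” is not a malfunction of some other mode; it is the one and only mode doing exactly what it always does.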