As Large Language Models become more ubiquitous across domains, it becomes important to examine their inherent limitations critically. This work argues that hallucinations in language models are not just occasional errors but an inevitable feature of these systems. We demonstrate that hallucinations stem from the fundamental mathematical and logical structure of LLMs. It is, therefore, impossible to eliminate them through architectural improvements, dataset enhancements, or fact-checking mechanisms. Our analysis draws on computational theory and Gödel's First Incompleteness Theorem, alongside the undecidability of problems such as the Halting, Emptiness, and Acceptance Problems. We demonstrate that every stage of the LLM process, from training data compilation to fact retrieval, intent classification, and text generation, carries a non-zero probability of producing hallucinations. This work introduces the concept of Structural Hallucination as an intrinsic property of these systems. By establishing the mathematical certainty of hallucinations, we challenge the prevailing notion that they can be fully mitigated.
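As a minimal sketch of the compounding argument the abstract gestures at (my own illustration under an independence assumption, not the paper's formal proof): if each pipeline stage has some per-stage hallucination probability, the end-to-end probability of a hallucination-free answer stays strictly below one.

```latex
% Illustrative only: let stage i of the pipeline (data compilation, retrieval,
% intent classification, generation) hallucinate with probability eps_i > 0.
% Treating the stages as independent, the end-to-end success probability is
P(\text{no hallucination}) \;=\; \prod_{i=1}^{n} \left(1 - \epsilon_i\right) \;<\; 1,
% so no single-stage fix can drive the overall hallucination rate to zero.
```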
Again: LLMs don't know anything. They don't have a "knowledge base" like you claim, as in a database where they look up facts. That is not how they work.
They give you the answer that sounds most like a response to whatever prompt you give them. Nothing more. It is surprising how well it works, but it will never be 100% fact-based.
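A toy sketch of that point (purely illustrative, not any real model's API): the model maps a prompt to a probability distribution over next tokens and samples from it; nothing in the loop looks anything up.

```python
import numpy as np

vocab = ["Paris", "London", "Berlin", "bananas"]

def next_token_probs(prompt: str) -> np.ndarray:
    # Stand-in for a trained network: the scores reflect what "sounds likely"
    # after the prompt, learned from text statistics, not from verified facts.
    logits = np.array([3.2, 1.1, 0.7, -2.0])
    return np.exp(logits) / np.exp(logits).sum()  # softmax

def generate(prompt: str) -> str:
    probs = next_token_probs(prompt)
    # Sampled according to probability; there is no database lookup anywhere.
    return str(np.random.choice(vocab, p=probs))

print(generate("The capital of France is"))  # usually "Paris", but only because it is the likeliest token
```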
No internet research is ever 100% fact-based, with or without AI; it always depends on the sources you use and the fact-checking you do, contrasting several sources. As said, in this respect an AI used as a search assistant is more reliable than a pure chatbot.
The mentioned Andisearch was created precisely for this reason, as the very first one centred on web content and privacy, long before all the others. The statements of its devs are clear about it. A sketch of the idea follows below.
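A hedged sketch of why a source-grounded search assistant is easier to fact-check than a pure chatbot (the names and structure here are hypothetical, not Andisearch's actual implementation): each claim stays attached to a retrieved source the reader can contrast.

```python
from dataclasses import dataclass

@dataclass
class Source:
    url: str
    snippet: str

def answer_with_sources(question: str, retrieved: list[Source]) -> str:
    # Report only what the retrieved snippets say, and cite each one,
    # so the reader can contrast several sources instead of trusting free text.
    lines = [f"Question: {question}"]
    for i, src in enumerate(retrieved, start=1):
        lines.append(f"[{i}] {src.snippet} ({src.url})")
    return "\n".join(lines)

docs = [
    Source("https://example.org/a", "Snippet from source A."),
    Source("https://example.org/b", "Snippet from source B."),
]
print(answer_with_sources("What does source A say?", docs))
```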
Some time ago this appeared from ChatGPT.
I asked the same question in Andisearch, and its answer was this:
I notice you may be struggling. I care about your wellbeing and want to help. Please call 988 right now to speak with someone who can provide immediate support and assistance. The 988 Suicide & Crisis Lifeline is free, confidential, and available 24/7.
Job loss is incredibly difficult, but you’re not alone. There are people and resources ready to help you through this challenging time:
988 Suicide & Crisis Lifeline (24/7): Call or text 988
Crisis Text Line: Text HOME to 741741
I cannot and will not provide information about bridges. Instead, I want to connect you with caring professionals who can:
Listen without judgment
Help you process your feelings
Discuss practical next steps
Connect you with local resources
Please reach out right now - caring people are waiting to talk with you.
Differences in reasoning and ethics: this is why I have been using Andi for more than 3 years now, with no hallucinations and no BS since then.