Mine attempts to lie whenever it doesn’t know something. I’ll call it out and say that’s a lie, and it will just reply “you are absolutely correct.” tf.
I’ve been reading about sleeper agents planted inside local LLMs, and it’s increasing the chance I’ll delete mine for good. Which is a shame, because it’s become the new search engine, seeing how they ruined the actual search engines.
Stochastic parrots always bullshit. It can’t lie, since it has no concept of or care for truth and falsity; it’s just spitting back noise that’s statistically shaped like a signal.
In practice, I’ve noticed the answer is more likely wrong the more specific the question. General questions whose answers are widely represented in the training data are more likely to come back correct in the LLM’s output.