I have had in-person conversations with multiple people who swear they have fixed the AI hallucination problem the same way: “I always include the words ‘make sure all of the response is correct and factual without hallucinating’.”
These people think they are geniuses thanks to just telling the AI not to mess up.
Because these were in-person conversations with a fair amount of shared context, I know they are dead serious, and no one will dissuade them from thinking their “one weird trick” works.
All the funnier when, inevitably, they get a screwed-up response one day and feel betrayed because they explicitly told it not to screw up…
But yes, people take “prompt engineering” very seriously. I have seen people proudly display massively verbose prompts that often looked like more work than just doing the task themselves without an LLM. They really think it’s a very sophisticated and hard-to-acquire skill…
“Do not hallucinate”, lol… The best way to get a model not to hallucinate is to include the factual data in the prompt. But for that, you have to know the data in question…
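A rough sketch of what “include the factual data” means in practice; the facts, names, and the build_grounded_prompt helper below are made up for illustration, not any particular product’s setup:

    # Instead of "do not hallucinate", put the facts you already trust into the
    # prompt and ask the model to answer only from them.
    FACTS = {
        "refund window": "30 days from delivery",
        "support email": "help@example.com",
    }

    def build_grounded_prompt(question: str, facts: dict[str, str]) -> str:
        # Render the known-good facts as a bulleted context block.
        fact_lines = "\n".join(f"- {k}: {v}" for k, v in facts.items())
        return (
            "Answer the question using ONLY the facts below. "
            "If the facts do not cover it, say you don't know.\n\n"
            f"Facts:\n{fact_lines}\n\n"
            f"Question: {question}"
        )

    print(build_grounded_prompt("What is the refund window?", FACTS))

The point being: the grounding data carries the weight, not the instruction to behave.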
“ChatGPT, please do not lie to me.”
“I’m sorry Dave, I’m afraid I can’t do that.”
That framing is off, though: in order to lie, one must know that what one is saying isn’t true.
LLMs don’t lie, they bullshit.
Have you tried to not be depressed?