• underisk@lemmy.ml
    1 day ago

    You can’t make it detect a hallucination, or make it promise not to produce one.

    This is how you know these things are fucking worthless: the people in charge of them think they can combat this with anti-hallucination clauses in the prompt, as if the AI could tell when it was hallucinating. It already classified the output as plausible by generating it!

    • Deestan@lemmy.world
      1 day ago

      They try to do security the same way, by adding “pwease dont use dangerous shell commands” to the system prompt.

      Security researchers have dubbed it “Prompt Begging”

      • underisk@lemmy.ml
        1 day ago

        “On two occasions I have been asked, ‘Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?’ … I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.”

        It’s been over a hundred years since this quote and people still think computers are magic.