ChatGPT is full of sensitive private information and spits out verbatim text from CNN, Goodreads, WordPress blogs, fandom wikis, Terms of Service agreements, Stack Overflow source code, Wikipedia pages, news blogs, random internet comments, and much more.

    • JohnEdwa@sopuli.xyz · 11 months ago (edited)

      Because large language models don’t actually understand what is true or what is real; they just know how humans usually string words together, so they can conjure plausible, readable text. If your training data contains falsehoods, the model will learn to write them.
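
      A toy sketch of that point in Python (a made-up three-sentence corpus and a crude bigram counter, nothing like a real transformer, but the same statistical principle): the model reproduces the falsehood simply because, in its training data, that is what usually comes next.

      ```python
      from collections import Counter, defaultdict

      # Hypothetical training corpus that contains a falsehood.
      corpus = (
          "the moon is made of cheese . "
          "the moon is made of cheese . "
          "the moon orbits the earth . "
      ).split()

      # Count which word follows each word (a bigram model).
      following = defaultdict(Counter)
      for prev, nxt in zip(corpus, corpus[1:]):
          following[prev][nxt] += 1

      # Generate by always picking the most common continuation.
      word, output = "the", ["the"]
      for _ in range(6):
          word = following[word].most_common(1)[0][0]
          output.append(word)

      print(" ".join(output))  # the moon is made of cheese .
      ```

      Nothing here checks whether the moon is actually made of cheese; the falsehood wins purely because it appears more often.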

      To get something that would benefit from knowing both sides, we’d first need to create a proper AGI (artificial general intelligence), with the ability to actually think.

    • CileTheSane@lemmy.ca · 11 months ago

      Your friend tells you about his new job:
      He sits at a computer and a bunch of nonsense symbols are shown on the screen. He has to guess which symbol comes next. At first he was really bad at it, but over time he started noticing patterns: the symbol that looks like two x’s connected together is usually followed by the symbol that looks like a staff.
      Once he started guessing accurately on a regular basis, they had him guess more of the symbols that follow. Now he’s got the hang of it and they no longer tell him if he’s right or not. He has no idea why; it’s just the job they gave him.
      He shows you his work one day and you tell him those symbols are Chinese. He looks at you like you’re an idiot and says “nah man, it’s just nonsense. It does follow a pattern though: this one is next.”

      That is what LLMs are doing.
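
      The story maps onto code almost directly. A minimal sketch (the Chinese text is just example data): the program learns which symbol tends to follow which and announces "this one is next", and nothing in it ever knows, or needs to know, that the symbols are Chinese.

      ```python
      from collections import Counter, defaultdict

      # Example data: a few Chinese sentences, treated as opaque symbols.
      text = "我喜欢猫。我喜欢狗。我喜欢茶。"

      # Learn which symbol tends to follow which (pattern only, no meaning).
      next_counts = defaultdict(Counter)
      for a, b in zip(text, text[1:]):
          next_counts[a][b] += 1

      # "This one is next": the most common continuation of 喜.
      print(next_counts["喜"].most_common(1)[0][0])  # 欢
      ```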