• Zetta@mander.xyz · 7 hours ago

    The problem is that many people don’t understand this no matter how often we bring it up. I personally find LLMs to be very valuable tools when used in the right context. But the majority of people who use these models don’t understand what they are, or why they shouldn’t trust them or take critical advice from them.

    I didn’t read this article, but there’s also the fact that some people want biased or incorrect information from the models; they just want them to agree with them. For instance, this teen who killed themself may not have been seeking truthful or helpful information in the first place, but instead just wanted the model to agree with them and help them plan the best way to die.

    Of course, OpenAI probably should have detected this and had the model stop engaging with this individual.

  • Timecircleline@sh.itjust.works · 1 hour ago

      The court documents with the extracted chat text are linked in this thread. The model talked him out of seeking help, and when he said he hoped his family would stop him, it encouraged him not to leave signs of his suicidality out where they would see them.