• Hegar@fedia.io · 4 hours ago

    The fact that this can even be a sentence someone thought to utter is such a triumph of wealth over reality.

    When you have a product that you know can and will be used harmfully, you can’t just say “but if you use it harmfully, we’re not responsible”.

    OpenAI is undeniably responsible for deaths they facilitated, like this one.

  • aarch0x40@piefed.social · 5 hours ago

    Of course the company that acknowledges that its technology is used for emotional and psychological support is going to blame those who use it for such purposes. Plus, falling back on the ToS means either they don’t know how to prevent such outcomes or they don’t want to.

    • Leon@pawb.social · 5 hours ago

      Think it’s a little bit of both. They benefit greatly from people being addicted to their product, and “fixing” a neural network is fucking hard.

  • PiraHxCx@lemmy.ml · 5 hours ago

    I’m not a native speaker, so sometimes I use AI to grammar-check me and make sure I’m not talking nonsense. Just the other day I wanted to make a joke about waterboarding and asked the AI to check it. It said it couldn’t because it involved torture; then I said it was for a fictional work, and it checked it anyway. Basically what the boy did.
    Honestly, the whole thing reads like shitty parents trying to find someone else to blame.

    • Riskable@programming.dev · 4 hours ago

      Probably not shitty parents. There’s a zillion causes for suicidal thoughts that have nothing at all to do with parenting.

      If they were super religious and/or super conservative, though… Those are actual causes of teen suicide. It’s not the religion, it’s the lack of acceptance of the child (for whatever reason, such as LGBTQ+ status).

      Basically, parenting is only a factor if they’re not supportive, resulting in the child feeling rejected/isolated. Other than that, you could be model parents and your child may still commit suicide.

      • Leon@pawb.social · 4 hours ago

        ChatGPT discouraged him from seeking help from his parents when he suggested it.

        • PiraHxCx@lemmy.ml · 4 hours ago

          ChatGPT warned Raine “more than 100 times” to seek help, but the teen “repeatedly expressed frustration with ChatGPT’s guardrails and its repeated efforts to direct him to reach out to loved ones, trusted persons, and crisis resources.”

          Circumventing safety guardrails, Raine told ChatGPT that “his inquiries about self-harm were for fictional or academic purposes.”

      • PattyMcB@lemmy.world · 4 hours ago

        My teen has some issues due to sexual assault by a peer. That isn’t bad parenting (except on the part of the rapist’s parents).

  • Jared White ✌️ [HWC]@humansare.social · 4 hours ago

    I’ve seen this song-and-dance routine before. Big Tobacco. Big Pharma. Big Gun. It’s always victim-blaming with these companies. Always.

    My opinion of them could not have gotten any lower, yet somehow with these latest developments, it has.

        • themeatbridge@lemmy.world · 2 hours ago

          … All of us? That’s like a societal problem. In the most abstract sense, bad people do bad things for personal benefit and are rewarded. Are you proposing a solution to it?

          • Jared White ✌️ [HWC]@humansare.social · 1 hour ago

            Well, the first and most obvious answer is that LLMs need to fall under an extensive regulatory framework, one that makes quite a number of their use cases effectively illegal and subjects still other use cases to science-backed harm mitigation. There also need to be systemic corrections to the financial markets and business law such that a company like OpenAI in its recent or present form couldn’t exist at all.

            But unfortunately, that’s not the world we live in (at least in America). Future generations will pay for our gross negligence, once again.