• Jared White ✌️ [HWC]@humansare.social · 12 days ago

    Who knew that “simulating” human conversations based on extruded text strings that have no basis in grounded reality or fact could send people into spirals of delusion?

  • minorkeys@lemmy.world · 12 days ago

    Are companies that force employees to use LLMs going to be liable for the mental health issues those tools produce?

  • pyrinix@kbin.melroy.org · 12 days ago

    Talking to AI chatbots is about as useful as talking to walls, except that we decided to have those walls talk back to us.

    And they aren’t saying anything insightful or useful.

  • FosterMolasses@leminal.space · 12 days ago

    One recent peer-reviewed case study focused on a 26-year-old woman who was hospitalized twice after she believed ChatGPT was allowing her to talk with her dead brother

    I feel like the bar for the Turing test is lower than ever… You can’t tell ChatGPT apart from your own relatives??

    • potoooooooo ✅️@lemmy.world · 12 days ago

      My cousin lost her young daughter a few years back. At Christmas, she had used AI to put her daughter in her Christmas photo. I didn’t have words, because it made her so happy, and I can’t fathom her grief, but man. Felt pretty fucked.

      • TheOakTree@lemmy.zip · 11 days ago

        I feel you. I can’t deny the comfort it brought her, but I also can’t help but feel like it is training her to reject her grief.

        Not that I’m in any position to pass judgement. I just hope it doesn’t lead to anything more severe.

    • Bonifratz@piefed.zip · 12 days ago

      That’s what the article says, yes:

      “The technology might not introduce the delusion, but the person tells the computer it’s their reality and the computer accepts it as truth and reflects it back, so it’s complicit in cycling that delusion,” Sakata told the WSJ.

      • Jax@sh.itjust.works · 12 days ago

        Thing that tells you exactly what you want to hear causes delusions?

        Whaaat?

        I completely understand why articles like this need to exist. Information about what ‘AI’ actually is needs to be spread. That said, I can’t shake the impression that this is just incredibly obvious. Like one of those studies on whether a dog actually loves its owner that goes as far as putting the dog in an MRI to scan its brain while it looks at its owner.

        Like, thank you mystery researcher on the internet — but you could have saved the helium by just sticking to Occam’s Razor.

  • Zacryon@feddit.org · 12 days ago

    I’d say know your tools. People misusing “stuff” and being vulnerable to it is nothing new, and in a lot of cases we rely on people’s independence and maturity in the decisions they make. It’s no different with LLMs. Of course, meaningful (technological) safeguards should still be implemented wherever possible.

    • Amberskin@europe.pub · 12 days ago

      By their very nature, there is no way to implement robust safeguards in an LLM. The technology is toxic, and the best that could happen is that something else is developed, hopefully not based on brute-forcing the production of a stream of tokens, that makes it obvious LLMs are a false path, a road that should not be taken.

  • data_science_rocks@scribe.disroot.org · 12 days ago

    If you increase the sample size, you increase the number of hits. Proportionally, AI is still just as safe. What a bullshit opinion piece. Inconsequential, just like the fucks agreeing with this shit take.

    • prole@lemmy.blahaj.zone · 12 days ago

      You increase the sample size, you increase the number of hits.

      Do you think statisticians aren’t well aware of this?

      • data_science_rocks@scribe.disroot.org · 12 days ago

        I am a fucking statistician. And you need a fucking control group to establish causality.

        Gtfo if you don’t understand this basic principle.

        The article and your argument are both entirely devoid of substance.
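
        To make that concrete, here’s a toy sketch (all numbers invented, not from any study) of the comparison you’d actually need before claiming a causal link:

        ```python
        # Toy numbers only: compare an "exposed" group against a control group.
        # The raw case count among AI users says nothing until you compare rates.
        ai_users, ai_cases = 50_000, 210      # hypothetical heavy LLM users
        controls, ctl_cases = 50_000, 200     # hypothetical matched non-users

        rate_exposed = ai_cases / ai_users
        rate_control = ctl_cases / controls
        risk_ratio = rate_exposed / rate_control

        print(f"exposed: {rate_exposed:.2%}, control: {rate_control:.2%}, "
              f"risk ratio: {risk_ratio:.2f}")  # ~1.0 would mean no excess risk
        ```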

        • FosterMolasses@leminal.space · 12 days ago

          If the statisticians involved in this case study are anywhere close to as unhinged as you are then it’s no wonder they got those results lol

          • NoModsNoMasters@lemmy.world · 11 days ago

            Homie been smokin’ them data science rocks, it seems.

            Literally made an account on this instance just to let them know I think they’re fucking dense, but I decided they’re not even worth interacting with personally.

    • Bonifratz@piefed.zip · 12 days ago

      Huh? The whole point of this emerging scientific debate is that AI use might be proportionally unsafe, i.e. it might be a risk factor causing and/or exacerbating psychosis. Now, sure, this is still just a hypothesis and it’s too early to make definite epidemiological statements, but it’s just as wrong to flatly state that AI is “still just as safe”.

      • data_science_rocks@scribe.disroot.org · 12 days ago

        “just as safe” is a relational, not absolutist, statement. I’m saying AI is at X level of safety, and more cases emerging does not imply an increasing risk of psychosis. That risk is where it’s always been.

        You’re twisting my words because you’re likely one of those brain-dead AI haters.

        I don’t particularly love or hate AI; the difference is that I look at it critically instead of emotionally. If the population at large has the same propensity for psychosis as the rate seen among AI users, then what we’re seeing is correlation, not causation.
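
        A back-of-the-envelope sketch of that base-rate point, with a purely made-up incidence figure:

        ```python
        # Made-up rate, just to show that a constant per-user risk still yields
        # many more absolute cases as the user base grows.
        base_rate = 0.004  # assumed annual incidence; NOT a real statistic
        for users in (1_000_000, 10_000_000, 100_000_000):
            expected_cases = int(users * base_rate)
            print(f"{users:>11,} users -> ~{expected_cases:,} expected cases "
                  f"(rate unchanged at {base_rate:.1%})")
        ```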

        • Bonifratz@piefed.zip · 12 days ago

          Alright, but the point is that the “X level of safety” AI is at might be a dangerous level in the first place. I don’t think anybody is arguing that AI got more dangerous as a psychosis risk factor over the past year or so, they’re arguing that AI was a risk factor to begin with, and with increased AI use more evidence of this turns up. So you saying that the inherent risk of AI hasn’t changed is kind of a moot point because that’s not what the debate is about.

          Also notice that I clearly said it’s too early to tell one way or the other, so there’s no reason to malign me as uncritical.

          • data_science_rocks@scribe.disroot.org · 12 days ago

            You ignored my last paragraph. Yes, it’s too early to tell, hence an opinion piece saying “Almost Certainly Linked To” is a distortion of reality. It’s laughably biased, and it will induce that belief in less-critical readers.

            • Bonifratz@piefed.zip · 12 days ago

              I can agree with that. (As an aside, I think scientific findings are almost always exaggerated like this in popular journalism.)

              I’d say the long and short of it is that we simply don’t (and can’t) know yet. But I think more research on possible links between AI and psychotic delusions is definitely useful, because I find the idea of a connection plausible.

        • ZDL@lazysoci.al · 11 days ago

          I don’t particularly love or hate AI …

          Says the person calling people “fucks agreeing with this shit take”, “brain-dead AI haters”, and “less-critical readers”, and that’s just in this thread alone. Who knows what else I’d find looking through your full posting history.

          Not a very convincing act, even for a clank-fucker.

          • data_science_rocks@scribe.disroot.org · 11 days ago

            Yes. Because taking a side is a shit take. Defending an article taking a side is a shit take.

            Whatever sort of “argument” you think you have by cherry-picking is a shit take.