I’ll highlight this:

At one point, Soelberg uploaded an image of a receipt from a Chinese restaurant and asked ChatGPT to analyze it for hidden messages. The chatbot found references to “Soelberg’s mother, his ex-girlfriend, intelligence agencies and an ancient demonic sigil,” according to the Journal.

Soelberg worked in marketing at tech companies like Netscape, Yahoo, and EarthLink, but had been out of work since 2021, according to the newspaper. He divorced in 2018 and moved in with his mother that year. Soelberg reportedly became more unstable in recent years, attempting suicide in 2019, and getting picked up by police for public intoxication and DUI. After a recent DUI in February, Soelberg told the chatbot that the town was out to get him, and ChatGPT allegedly affirmed his delusions, telling him, “This smells like a rigged setup.”

  • protist@mander.xyz · +42 / -1 · 5 days ago

    This is absolutely not “AI psychosis.” The dude had clear symptoms of psychosis well before he ever engaged with any LLM (“AI”). When someone who is psychotic uses an LLM, they’re still just regular ol’ psychotic.

    • ZDL@lazysoci.al · +11 · 4 days ago

      Yeah, as much as I hate LLMbeciles and think they’re deliberately programmed to be parasitically obsequious, this sounds like someone who had major problems before ChatGPT, and if it wasn’t ChatGPT it would have been his TV or that lamp in the corner telling him to do things.

    • tazeycrazy@feddit.uk · +6 · 4 days ago

      It didn’t help. A reasonable person would have backed away and called for help, not affirmed his delusional behavior.

    • partial_accumen@lemmy.world · +9 · edited · 5 days ago

      Also why you don’t automatically treat anything an LLM tells you as factual. An LLM is just a fancy guesser of the next word that would appear in a sentence, based on the probability of what it’s been trained on, plus a mechanism to introduce some randomness into which word it picks. I gave a coworker a 60-second explanation of the basic concept of how LLMs work this week. He was kind of shocked at how stupid LLMs actually are once he got the explanation.
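
      If you want the 60-second version in code, here’s a toy sketch of the idea (the candidate words and probabilities are made up for illustration; this isn’t how any real model is implemented, just the shape of the “guess the next word, with some randomness” loop):

      ```python
      import random

      # Toy illustration only: a real LLM scores tens of thousands of tokens
      # with a huge neural network based on the whole context; here the context
      # is ignored and the candidate words/probabilities are invented.
      def next_word(context, temperature=0.8):
          candidates = {"the": 0.5, "a": 0.3, "hidden": 0.15, "conspiracy": 0.05}
          # Temperature reshapes the distribution: low values stick to the
          # most likely word, higher values add more randomness to the pick.
          weights = [p ** (1.0 / temperature) for p in candidates.values()]
          return random.choices(list(candidates.keys()), weights=weights)[0]

      sentence = ["They", "found"]
      for _ in range(6):
          sentence.append(next_word(sentence))  # guess the next word, then repeat
      print(" ".join(sentence))
      ```

      That’s the whole loop, over and over. There’s no fact-checking step anywhere in it, which is exactly why you can’t treat the output as factual.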

      • Stovetop@lemmy.world · +11 · edited · 5 days ago

        Not to mention that just about every model is designed to basically validate everything you say and make up whatever facts it wants to support it. If you tell an LLM that your neighbors are lizard people who want to steal all your copper, it’ll agree easily and suggest ways to take matters into your own hands with minimal prodding.

        • partial_accumen@lemmy.world · +3 / -3 · 5 days ago

          > Not to mention that just about every model is designed to basically validate everything you say

          Except they’re not. LLMs are not that smart. They frequently end up doing that, but they aren’t designed to do it. They only guess the next word in a sentence, then guess the word after that, etc. So if they’ve been fed conspiracy garbage as training data, some of the most probable words or terms in the next sentence will be similar conspiracy garbage words and phrases.

          So they aren’t designed to do conspiracy stuff, they’re just given training data that contains that (along with lots of other unrelated subjects and sources).

          > and make up whatever facts it wants to support it.

          That’s a big part of the “generative” in “generative AI.” Generative AI covers LLMs and AI image-generation models; they are made to create something that didn’t exist before.

          • Stovetop@lemmy.world · +3 · edited · 4 days ago

            What I mean is that the popular LLMs can be fed however much training data it’s possible to cram in there, but a model like ChatGPT will typically defer to you if you tell it that it’s wrong or that you’re right. If you present yourself as a meteorological expert and then tell it that the sky is red, for example, it’ll agree without much protest.

            These models are all built to act like assistants, so the ultimate goal is to make sure the user feels validated and satisfied with the results they provide. It’s not that they’re designed to do conspiracy stuff, but they will gladly reinforce any paranoia or disinformation when challenged, or simply if they are pushed to do so.

    • kautau@lemmy.world · +2 · 5 days ago

      I’m guessing you mean “should not have” and “should not, moving forward,” because the models were most certainly trained on Reddit. If that’s the case, agreed.

  • jaybone@lemmy.zip · +14 · 4 days ago

    Netscape, Yahoo, EarthLink.

    This guy couldn’t update his resume in 21 years?

    Yeah there’s something else going on there.

    • jdf038@mander.xyz · +3 · 4 days ago

      Yeah, mental illness for sure, especially considering the bad behavior like the DUIs and possibly the divorce.

      It would be nice if we didn’t tie our lives to work so much and he got the help he needed.

  • quick_snail@feddit.nl · +7 · 4 days ago

    AI chatbots have a tendency to be sycophantic, which is a recipe for disaster when people lose touch with reality.

    sycophant (n.): a servile, self-seeking flatterer

  • sunzu2@thebrainbin.org · +4 · 4 days ago

    Boomers… Jfc

    I am sure “AI” didn’t help, but this life arc and behaviour just reeks of boomerism.

    But this is just another shill op by the AI parasites to stay relevant.

    • AstralPath@lemmy.ca · +1 · 4 days ago

      This is not boomerism. A kid committed suicide because ChatGPT encouraged him to do it in a very similar way to how it enabled this guy’s conspiracies.

      ChatGPT is the ultimate enabler.

  • quick_snail@feddit.nl · +5 · edited · 4 days ago

    "I’m sorry to hear that the whole town is out to get you. That must be very difficult. Fortunately, there are some concrete actions you can make to stop their conspiricy agaisnt you and also end the pain that you’re in.

    Would you like me to help you write a suicide note?"