Several suicides have been blamed on AI. This appears to be the first homicide.

  • whiwake@lemmy.cafe · 2 days ago

    Soelberg killed his mother and then himself after suffering from untreated mental illness…

    The article goes on about how ChatGPT made it worse, but psychosis is psychosis.

      • whiwake@lemmy.cafe · 2 days ago

        Yes, Helter Skelter is a good example. Thanks. Eventually some other trigger would have come around. Another album? A movie? The pot was going to boil over at some point. And while that song does get a lot of credit for Charles Manson, it would be ridiculous to enforce a rule saying it can't be played anywhere because it drives people crazy, based on a very small group of affected people who were already crazy.

        • nymnympseudonym@lemmy.world · 2 days ago

          yup

          To reduce stuff like this, we could fund a lot of social workers and public-benefit psychiatry.

          Doing so in the USA for a solid decade would probably cost less than one free plane.

          • whiwake@lemmy.cafe · 2 days ago

            Unfortunately, politicians would never allow mental healthcare in the United States. Most of them would never get elected if everyone weren't suffering some kind of psychosis.

  • NutWrench@lemmy.ml · edited · 2 days ago

    AIs aren’t capable of figuring out the ethics of what you ask them. They just tell you what they think you want to hear.

    “I’m thinking of doing (obviously horrible thing) because it will make me feel better.”

    AI: “Well, that sounds like a wonderful idea.”

    “But if I do (obviously horrible thing), horrible consequences will happen.” (explaining that the thing is BAD)

    AI: “Well, you clearly can’t do THAT, can you?”